Test Report: Docker_Linux_crio_arm64 21974

4cf3e568bd19aa010164d0f2afa2e28844e6f351:2025-11-26:42526

Tests failed (40/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.8
35 TestAddons/parallel/Registry 16.58
36 TestAddons/parallel/RegistryCreds 0.48
37 TestAddons/parallel/Ingress 147.32
38 TestAddons/parallel/InspektorGadget 6.25
39 TestAddons/parallel/MetricsServer 5.37
41 TestAddons/parallel/CSI 54.77
42 TestAddons/parallel/Headlamp 4.43
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 11.1
45 TestAddons/parallel/NvidiaDevicePlugin 6.26
46 TestAddons/parallel/Yakd 6.26
97 TestFunctional/parallel/ServiceCmdConnect 603.92
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.96
140 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.87
141 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
153 TestFunctional/parallel/ServiceCmd/Format 0.39
154 TestFunctional/parallel/ServiceCmd/URL 0.44
177 TestMultiControlPlane/serial/RestartCluster 478.71
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 5.82
179 TestMultiControlPlane/serial/AddSecondaryNode 85.4
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 6.13
191 TestJSONOutput/pause/Command 2.43
197 TestJSONOutput/unpause/Command 1.96
282 TestPause/serial/Pause 7.55
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.47
304 TestStartStop/group/old-k8s-version/serial/Pause 6.34
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.3
317 TestStartStop/group/no-preload/serial/Pause 6.23
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.54
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.47
331 TestStartStop/group/embed-certs/serial/Pause 7.39
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.54
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 9.43
348 TestStartStop/group/newest-cni/serial/Pause 7.83
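For triage, the table above can be tallied by top-level suite. A minimal sketch (the list below hand-copies the failed test names from the table; durations are omitted):

```python
from collections import Counter

# Failed test names copied from the table above (durations omitted).
FAILED = [
    "TestAddons/serial/Volcano",
    "TestAddons/parallel/Registry",
    "TestAddons/parallel/RegistryCreds",
    "TestAddons/parallel/Ingress",
    "TestAddons/parallel/InspektorGadget",
    "TestAddons/parallel/MetricsServer",
    "TestAddons/parallel/CSI",
    "TestAddons/parallel/Headlamp",
    "TestAddons/parallel/CloudSpanner",
    "TestAddons/parallel/LocalPath",
    "TestAddons/parallel/NvidiaDevicePlugin",
    "TestAddons/parallel/Yakd",
    "TestFunctional/parallel/ServiceCmdConnect",
    "TestFunctional/parallel/ServiceCmd/DeployApp",
    "TestFunctional/parallel/ImageCommands/ImageLoadDaemon",
    "TestFunctional/parallel/ImageCommands/ImageReloadDaemon",
    "TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon",
    "TestFunctional/parallel/ImageCommands/ImageSaveToFile",
    "TestFunctional/parallel/ImageCommands/ImageLoadFromFile",
    "TestFunctional/parallel/ImageCommands/ImageSaveDaemon",
    "TestFunctional/parallel/ServiceCmd/HTTPS",
    "TestFunctional/parallel/ServiceCmd/Format",
    "TestFunctional/parallel/ServiceCmd/URL",
    "TestMultiControlPlane/serial/RestartCluster",
    "TestMultiControlPlane/serial/DegradedAfterClusterRestart",
    "TestMultiControlPlane/serial/AddSecondaryNode",
    "TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd",
    "TestJSONOutput/pause/Command",
    "TestJSONOutput/unpause/Command",
    "TestPause/serial/Pause",
    "TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive",
    "TestStartStop/group/old-k8s-version/serial/Pause",
    "TestStartStop/group/no-preload/serial/EnableAddonWhileActive",
    "TestStartStop/group/no-preload/serial/Pause",
    "TestStartStop/group/embed-certs/serial/EnableAddonWhileActive",
    "TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive",
    "TestStartStop/group/embed-certs/serial/Pause",
    "TestStartStop/group/newest-cni/serial/EnableAddonWhileActive",
    "TestStartStop/group/default-k8s-diff-port/serial/Pause",
    "TestStartStop/group/newest-cni/serial/Pause",
]

# Tally failures by top-level suite (the first path segment of each name).
by_suite = Counter(name.split("/")[0] for name in FAILED)
for suite, count in by_suite.most_common():
    print(f"{suite}: {count}")
```

TestAddons (12), TestFunctional (11), and TestStartStop (10) account for most of the 40 failures.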
TestAddons/serial/Volcano (0.8s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable volcano --alsologtostderr -v=1: exit status 11 (794.960104ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:39:26.880163   10776 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:39:26.880881   10776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:26.880895   10776 out.go:374] Setting ErrFile to fd 2...
	I1126 19:39:26.880900   10776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:26.881210   10776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:39:26.881546   10776 mustload.go:66] Loading cluster: addons-152801
	I1126 19:39:26.881974   10776 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:26.881993   10776 addons.go:622] checking whether the cluster is paused
	I1126 19:39:26.882135   10776 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:26.882151   10776 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:39:26.882659   10776 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:39:26.918874   10776 ssh_runner.go:195] Run: systemctl --version
	I1126 19:39:26.918925   10776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:39:26.946056   10776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:39:27.056764   10776 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:39:27.056867   10776 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:39:27.089436   10776 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:39:27.089462   10776 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:39:27.089467   10776 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:39:27.089471   10776 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:39:27.089474   10776 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:39:27.089478   10776 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:39:27.089480   10776 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:39:27.089483   10776 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:39:27.089486   10776 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:39:27.089492   10776 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:39:27.089495   10776 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:39:27.089499   10776 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:39:27.089502   10776 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:39:27.089505   10776 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:39:27.089508   10776 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:39:27.089513   10776 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:39:27.089520   10776 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:39:27.089524   10776 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:39:27.089527   10776 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:39:27.089530   10776 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:39:27.089535   10776 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:39:27.089541   10776 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:39:27.089544   10776 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:39:27.089548   10776 cri.go:89] found id: ""
	I1126 19:39:27.089597   10776 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:39:27.105772   10776 out.go:203] 
	W1126 19:39:27.108795   10776 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:39:27.108823   10776 out.go:285] * 
	* 
	W1126 19:39:27.585406   10776 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:39:27.588372   10776 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.80s)
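Note that the `exit status 11` failures in this report share one stderr signature (shown above for Volcano and repeated verbatim in the other addon-disable failures): the `MK_ADDON_DISABLE_PAUSED` reason code followed by runc's `open /run/runc: no such file or directory` error from the paused-state check. A minimal sketch of matching that signature when bucketing logs (the sample string is copied from the stderr above; the function name is mine, not minikube's):

```python
import re

# Sample copied from the Volcano stderr above.
stderr = '''X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
stdout:

stderr:
time="2025-11-26T19:39:27Z" level=error msg="open /run/runc: no such file or directory"'''

# Two stable markers: minikube's reason code, then runc's complaint that its
# state directory /run/runc does not exist on the node.
PATTERN = re.compile(
    r"MK_ADDON_DISABLE_PAUSED.*?open /run/runc: no such file or directory",
    re.DOTALL,
)

def is_paused_check_failure(text: str) -> bool:
    """Return True if the log text matches the shared addon-disable signature."""
    return PATTERN.search(text) is not None

print(is_paused_check_failure(stderr))  # True
```

Any failure matching this pattern is the same underlying issue (the paused-state check shelling out to `runc list` on a node where `/run/runc` is absent), so these can be deduplicated before filing.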

TestAddons/parallel/Registry (16.58s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.088346ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-scxrq" [bc7f6a37-ea49-4566-bd97-21f1047456d7] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005851522s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-sdxpt" [bf573c71-ee84-46f1-b932-717861ec5583] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.002994047s
addons_test.go:392: (dbg) Run:  kubectl --context addons-152801 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-152801 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-152801 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.047918971s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 ip
2025/11/26 19:39:53 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable registry --alsologtostderr -v=1: exit status 11 (252.254163ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:39:53.344804   11320 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:39:53.345032   11320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:53.345060   11320 out.go:374] Setting ErrFile to fd 2...
	I1126 19:39:53.345079   11320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:53.345366   11320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:39:53.345684   11320 mustload.go:66] Loading cluster: addons-152801
	I1126 19:39:53.346197   11320 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:53.346242   11320 addons.go:622] checking whether the cluster is paused
	I1126 19:39:53.346398   11320 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:53.346453   11320 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:39:53.347002   11320 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:39:53.365014   11320 ssh_runner.go:195] Run: systemctl --version
	I1126 19:39:53.365072   11320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:39:53.383398   11320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:39:53.484268   11320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:39:53.484404   11320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:39:53.512436   11320 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:39:53.512459   11320 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:39:53.512471   11320 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:39:53.512476   11320 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:39:53.512488   11320 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:39:53.512492   11320 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:39:53.512496   11320 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:39:53.512499   11320 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:39:53.512502   11320 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:39:53.512509   11320 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:39:53.512513   11320 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:39:53.512516   11320 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:39:53.512518   11320 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:39:53.512522   11320 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:39:53.512529   11320 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:39:53.512534   11320 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:39:53.512537   11320 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:39:53.512540   11320 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:39:53.512543   11320 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:39:53.512546   11320 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:39:53.512551   11320 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:39:53.512565   11320 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:39:53.512568   11320 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:39:53.512571   11320 cri.go:89] found id: ""
	I1126 19:39:53.512623   11320 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:39:53.528256   11320 out.go:203] 
	W1126 19:39:53.531222   11320 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:39:53.531249   11320 out.go:285] * 
	* 
	W1126 19:39:53.536064   11320 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:39:53.538981   11320 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.58s)

TestAddons/parallel/RegistryCreds (0.48s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.983553ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-152801
addons_test.go:332: (dbg) Run:  kubectl --context addons-152801 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (260.31178ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:40:55.610577   13398 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:40:55.610791   13398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:55.610825   13398 out.go:374] Setting ErrFile to fd 2...
	I1126 19:40:55.610846   13398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:55.611114   13398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:40:55.611416   13398 mustload.go:66] Loading cluster: addons-152801
	I1126 19:40:55.611823   13398 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:55.611867   13398 addons.go:622] checking whether the cluster is paused
	I1126 19:40:55.612001   13398 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:55.612036   13398 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:40:55.612593   13398 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:40:55.630608   13398 ssh_runner.go:195] Run: systemctl --version
	I1126 19:40:55.630666   13398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:40:55.651901   13398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:40:55.756670   13398 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:40:55.756782   13398 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:40:55.795411   13398 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:40:55.795441   13398 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:40:55.795447   13398 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:40:55.795451   13398 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:40:55.795455   13398 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:40:55.795458   13398 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:40:55.795462   13398 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:40:55.795465   13398 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:40:55.795468   13398 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:40:55.795478   13398 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:40:55.795482   13398 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:40:55.795486   13398 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:40:55.795489   13398 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:40:55.795492   13398 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:40:55.795496   13398 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:40:55.795504   13398 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:40:55.795512   13398 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:40:55.795517   13398 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:40:55.795521   13398 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:40:55.795524   13398 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:40:55.795528   13398 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:40:55.795532   13398 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:40:55.795535   13398 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:40:55.795538   13398 cri.go:89] found id: ""
	I1126 19:40:55.795588   13398 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:40:55.810656   13398 out.go:203] 
	W1126 19:40:55.813713   13398 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:40:55.813736   13398 out.go:285] * 
	* 
	W1126 19:40:55.818766   13398 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:40:55.821767   13398 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.48s)

TestAddons/parallel/Ingress (147.32s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-152801 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-152801 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-152801 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [bbaa9145-b9ab-436a-8653-8f7342857206] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [bbaa9145-b9ab-436a-8653-8f7342857206] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.00432441s
I1126 19:40:27.499906    4129 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.474494576s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-152801 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
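The `Process exited with status 28` in the stderr above is the remote command's own exit status propagated through `minikube ssh`, and 28 is curl's documented "operation timed out" code: the ingress controller on 127.0.0.1:80 accepted the connection but never sent a response before the timeout. A minimal local sketch of how that exit code arises (the stalling listener and port 8099 are illustrative, not part of the test suite):

```shell
# Start a server that accepts TCP connections but never answers, so curl
# connects successfully and then stalls waiting for an HTTP response.
python3 -c 'import socket, time
s = socket.socket(); s.bind(("127.0.0.1", 8099)); s.listen(1)
s.accept(); time.sleep(5)' &
sleep 0.5   # give the listener time to bind

# Same request shape as the failing step, capped at 1s; curl gives up with
# exit 28 ("operation timed out", per curl(1)) rather than 7 (refused).
curl -s --max-time 1 http://127.0.0.1:8099/ -H 'Host: nginx.example.com'
echo "curl exit: $?"   # prints: curl exit: 28
```

In the test itself the timeout comes from the retry window around `addons_test.go:264` (the command ran for 2m10s), but the failure mode is the same: a connection that opens and then hangs.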
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-152801
helpers_test.go:243: (dbg) docker inspect addons-152801:
-- stdout --
	[
	    {
	        "Id": "3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae",
	        "Created": "2025-11-26T19:37:09.20678067Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5287,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T19:37:09.272629667Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae/hosts",
	        "LogPath": "/var/lib/docker/containers/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae-json.log",
	        "Name": "/addons-152801",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-152801:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-152801",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae",
	                "LowerDir": "/var/lib/docker/overlay2/a388f63ff930544e473204efaaf20b3bd5bc52e2d648ced1b77967bf09bdd5bc-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a388f63ff930544e473204efaaf20b3bd5bc52e2d648ced1b77967bf09bdd5bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a388f63ff930544e473204efaaf20b3bd5bc52e2d648ced1b77967bf09bdd5bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a388f63ff930544e473204efaaf20b3bd5bc52e2d648ced1b77967bf09bdd5bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-152801",
	                "Source": "/var/lib/docker/volumes/addons-152801/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-152801",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-152801",
	                "name.minikube.sigs.k8s.io": "addons-152801",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e584929d8dbb29efc932d6f088e2f19fb3e810e31669f8c94ce81e02c8703a76",
	            "SandboxKey": "/var/run/docker/netns/e584929d8dbb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-152801": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:06:28:d3:80:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "791264f8919751140113e621337947c00a2209ef659bb8a64a18b76705940d76",
	                    "EndpointID": "cc076c0fd6f8620c858df9b21ee74d7fe98ec959e15d20ce2fd4a668cba9060c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-152801",
	                        "3f8d1177ed55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-152801 -n addons-152801
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-152801 logs -n 25: (1.484830639s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-938641                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-938641 │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ start   │ --download-only -p binary-mirror-453571 --alsologtostderr --binary-mirror http://127.0.0.1:34029 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-453571   │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │                     │
	│ delete  │ -p binary-mirror-453571                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-453571   │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ addons  │ enable dashboard -p addons-152801                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │                     │
	│ addons  │ disable dashboard -p addons-152801                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │                     │
	│ start   │ -p addons-152801 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:39 UTC │
	│ addons  │ addons-152801 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ addons  │ addons-152801 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ addons  │ addons-152801 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ addons  │ addons-152801 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ ip      │ addons-152801 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │ 26 Nov 25 19:39 UTC │
	│ addons  │ addons-152801 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ addons  │ addons-152801 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ addons  │ enable headlamp -p addons-152801 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ ssh     │ addons-152801 ssh cat /opt/local-path-provisioner/pvc-6c7297e5-0e4c-403d-b89a-2e241166a087_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │ 26 Nov 25 19:39 UTC │
	│ addons  │ addons-152801 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ addons  │ addons-152801 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:40 UTC │                     │
	│ addons  │ addons-152801 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:40 UTC │                     │
	│ addons  │ addons-152801 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:40 UTC │                     │
	│ ssh     │ addons-152801 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:40 UTC │                     │
	│ addons  │ addons-152801 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:40 UTC │                     │
	│ addons  │ addons-152801 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:40 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-152801                                                                                                                                                                                                                                                                                                                                                                                           │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:40 UTC │ 26 Nov 25 19:40 UTC │
	│ addons  │ addons-152801 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:40 UTC │                     │
	│ ip      │ addons-152801 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:42 UTC │ 26 Nov 25 19:42 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:36:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:36:43.471931    4888 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:36:43.472045    4888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:36:43.472056    4888 out.go:374] Setting ErrFile to fd 2...
	I1126 19:36:43.472062    4888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:36:43.472303    4888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:36:43.472724    4888 out.go:368] Setting JSON to false
	I1126 19:36:43.473416    4888 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1134,"bootTime":1764184670,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 19:36:43.473479    4888 start.go:143] virtualization:  
	I1126 19:36:43.475110    4888 out.go:179] * [addons-152801] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 19:36:43.476472    4888 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:36:43.476563    4888 notify.go:221] Checking for updates...
	I1126 19:36:43.479166    4888 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:36:43.480543    4888 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 19:36:43.481717    4888 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 19:36:43.482826    4888 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 19:36:43.484055    4888 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:36:43.485460    4888 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:36:43.506176    4888 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 19:36:43.506308    4888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:36:43.568905    4888 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-26 19:36:43.559447899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:36:43.569006    4888 docker.go:319] overlay module found
	I1126 19:36:43.570400    4888 out.go:179] * Using the docker driver based on user configuration
	I1126 19:36:43.571643    4888 start.go:309] selected driver: docker
	I1126 19:36:43.571666    4888 start.go:927] validating driver "docker" against <nil>
	I1126 19:36:43.571679    4888 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:36:43.572421    4888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:36:43.622770    4888 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-26 19:36:43.614433972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:36:43.622928    4888 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 19:36:43.623140    4888 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:36:43.624633    4888 out.go:179] * Using Docker driver with root privileges
	I1126 19:36:43.625909    4888 cni.go:84] Creating CNI manager for ""
	I1126 19:36:43.626005    4888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:36:43.626013    4888 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 19:36:43.626091    4888 start.go:353] cluster config:
	{Name:addons-152801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-152801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:36:43.628484    4888 out.go:179] * Starting "addons-152801" primary control-plane node in "addons-152801" cluster
	I1126 19:36:43.629728    4888 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 19:36:43.631057    4888 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 19:36:43.632380    4888 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:36:43.632420    4888 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 19:36:43.632432    4888 cache.go:65] Caching tarball of preloaded images
	I1126 19:36:43.632452    4888 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 19:36:43.632524    4888 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 19:36:43.632535    4888 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 19:36:43.632884    4888 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/config.json ...
	I1126 19:36:43.632938    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/config.json: {Name:mk5d289ab55aa4f11a8101e03a097106e1da928c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:36:43.648105    4888 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1126 19:36:43.648225    4888 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1126 19:36:43.648249    4888 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1126 19:36:43.648255    4888 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1126 19:36:43.648262    4888 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1126 19:36:43.648267    4888 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1126 19:37:01.543984    4888 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1126 19:37:01.544021    4888 cache.go:243] Successfully downloaded all kic artifacts
	I1126 19:37:01.544057    4888 start.go:360] acquireMachinesLock for addons-152801: {Name:mk24b9e69899438b99e9d16cbbe183077c32e652 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 19:37:01.544188    4888 start.go:364] duration metric: took 104.529µs to acquireMachinesLock for "addons-152801"
	I1126 19:37:01.544215    4888 start.go:93] Provisioning new machine with config: &{Name:addons-152801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-152801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 19:37:01.544301    4888 start.go:125] createHost starting for "" (driver="docker")
	I1126 19:37:01.547652    4888 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1126 19:37:01.547886    4888 start.go:159] libmachine.API.Create for "addons-152801" (driver="docker")
	I1126 19:37:01.547920    4888 client.go:173] LocalClient.Create starting
	I1126 19:37:01.548027    4888 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem
	I1126 19:37:01.891695    4888 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem
	I1126 19:37:02.208986    4888 cli_runner.go:164] Run: docker network inspect addons-152801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 19:37:02.226313    4888 cli_runner.go:211] docker network inspect addons-152801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 19:37:02.226398    4888 network_create.go:284] running [docker network inspect addons-152801] to gather additional debugging logs...
	I1126 19:37:02.226420    4888 cli_runner.go:164] Run: docker network inspect addons-152801
	W1126 19:37:02.241968    4888 cli_runner.go:211] docker network inspect addons-152801 returned with exit code 1
	I1126 19:37:02.241999    4888 network_create.go:287] error running [docker network inspect addons-152801]: docker network inspect addons-152801: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-152801 not found
	I1126 19:37:02.242013    4888 network_create.go:289] output of [docker network inspect addons-152801]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-152801 not found
	
	** /stderr **
	I1126 19:37:02.242146    4888 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 19:37:02.258644    4888 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b3f430}
	I1126 19:37:02.258691    4888 network_create.go:124] attempt to create docker network addons-152801 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1126 19:37:02.258793    4888 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-152801 addons-152801
	I1126 19:37:02.325280    4888 network_create.go:108] docker network addons-152801 192.168.49.0/24 created
	I1126 19:37:02.325311    4888 kic.go:121] calculated static IP "192.168.49.2" for the "addons-152801" container
	I1126 19:37:02.325390    4888 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 19:37:02.341875    4888 cli_runner.go:164] Run: docker volume create addons-152801 --label name.minikube.sigs.k8s.io=addons-152801 --label created_by.minikube.sigs.k8s.io=true
	I1126 19:37:02.360001    4888 oci.go:103] Successfully created a docker volume addons-152801
	I1126 19:37:02.360100    4888 cli_runner.go:164] Run: docker run --rm --name addons-152801-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-152801 --entrypoint /usr/bin/test -v addons-152801:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 19:37:04.665797    4888 cli_runner.go:217] Completed: docker run --rm --name addons-152801-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-152801 --entrypoint /usr/bin/test -v addons-152801:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (2.305657124s)
	I1126 19:37:04.665825    4888 oci.go:107] Successfully prepared a docker volume addons-152801
	I1126 19:37:04.665872    4888 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:37:04.665889    4888 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 19:37:04.665989    4888 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-152801:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 19:37:09.127055    4888 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-152801:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.461019169s)
	I1126 19:37:09.127089    4888 kic.go:203] duration metric: took 4.461197008s to extract preloaded images to volume ...
	W1126 19:37:09.127232    4888 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1126 19:37:09.127348    4888 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 19:37:09.192533    4888 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-152801 --name addons-152801 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-152801 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-152801 --network addons-152801 --ip 192.168.49.2 --volume addons-152801:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 19:37:09.523927    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Running}}
	I1126 19:37:09.549914    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:09.576481    4888 cli_runner.go:164] Run: docker exec addons-152801 stat /var/lib/dpkg/alternatives/iptables
	I1126 19:37:09.624082    4888 oci.go:144] the created container "addons-152801" has a running status.
	I1126 19:37:09.624122    4888 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa...
	I1126 19:37:09.906846    4888 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 19:37:09.939699    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:09.968877    4888 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 19:37:09.968898    4888 kic_runner.go:114] Args: [docker exec --privileged addons-152801 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 19:37:10.018076    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:10.036995    4888 machine.go:94] provisionDockerMachine start ...
	I1126 19:37:10.037093    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:10.055559    4888 main.go:143] libmachine: Using SSH client type: native
	I1126 19:37:10.055886    4888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:37:10.055901    4888 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 19:37:10.056661    4888 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1126 19:37:13.201186    4888 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-152801
	
	I1126 19:37:13.201209    4888 ubuntu.go:182] provisioning hostname "addons-152801"
	I1126 19:37:13.201271    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:13.218714    4888 main.go:143] libmachine: Using SSH client type: native
	I1126 19:37:13.219026    4888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:37:13.219046    4888 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-152801 && echo "addons-152801" | sudo tee /etc/hostname
	I1126 19:37:13.375843    4888 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-152801
	
	I1126 19:37:13.375947    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:13.394468    4888 main.go:143] libmachine: Using SSH client type: native
	I1126 19:37:13.394779    4888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:37:13.394801    4888 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-152801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-152801/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-152801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 19:37:13.542056    4888 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 19:37:13.542077    4888 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 19:37:13.542104    4888 ubuntu.go:190] setting up certificates
	I1126 19:37:13.542124    4888 provision.go:84] configureAuth start
	I1126 19:37:13.542180    4888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-152801
	I1126 19:37:13.558885    4888 provision.go:143] copyHostCerts
	I1126 19:37:13.558967    4888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 19:37:13.559087    4888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 19:37:13.559150    4888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 19:37:13.559224    4888 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.addons-152801 san=[127.0.0.1 192.168.49.2 addons-152801 localhost minikube]
	I1126 19:37:13.623176    4888 provision.go:177] copyRemoteCerts
	I1126 19:37:13.623240    4888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 19:37:13.623317    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:13.639937    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:13.745315    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 19:37:13.761902    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 19:37:13.780087    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 19:37:13.797027    4888 provision.go:87] duration metric: took 254.879005ms to configureAuth
	I1126 19:37:13.797052    4888 ubuntu.go:206] setting minikube options for container-runtime
	I1126 19:37:13.797236    4888 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:13.797346    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:13.814456    4888 main.go:143] libmachine: Using SSH client type: native
	I1126 19:37:13.814773    4888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:37:13.814791    4888 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 19:37:14.113275    4888 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 19:37:14.113298    4888 machine.go:97] duration metric: took 4.076284458s to provisionDockerMachine
	I1126 19:37:14.113310    4888 client.go:176] duration metric: took 12.565383136s to LocalClient.Create
	I1126 19:37:14.113349    4888 start.go:167] duration metric: took 12.565464018s to libmachine.API.Create "addons-152801"
	I1126 19:37:14.113361    4888 start.go:293] postStartSetup for "addons-152801" (driver="docker")
	I1126 19:37:14.113371    4888 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 19:37:14.113449    4888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 19:37:14.113495    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:14.131658    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:14.238902    4888 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 19:37:14.242126    4888 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 19:37:14.242158    4888 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 19:37:14.242171    4888 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 19:37:14.242281    4888 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 19:37:14.242312    4888 start.go:296] duration metric: took 128.945546ms for postStartSetup
	I1126 19:37:14.242628    4888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-152801
	I1126 19:37:14.258948    4888 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/config.json ...
	I1126 19:37:14.259217    4888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 19:37:14.259264    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:14.275073    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:14.374768    4888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 19:37:14.379280    4888 start.go:128] duration metric: took 12.834962865s to createHost
	I1126 19:37:14.379307    4888 start.go:83] releasing machines lock for "addons-152801", held for 12.835109558s
	I1126 19:37:14.379379    4888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-152801
	I1126 19:37:14.395842    4888 ssh_runner.go:195] Run: cat /version.json
	I1126 19:37:14.395901    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:14.396157    4888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 19:37:14.396214    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:14.414912    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:14.414961    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:14.601635    4888 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:14.607928    4888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 19:37:14.641938    4888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 19:37:14.645980    4888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 19:37:14.646049    4888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 19:37:14.673506    4888 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1126 19:37:14.673533    4888 start.go:496] detecting cgroup driver to use...
	I1126 19:37:14.673563    4888 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 19:37:14.673613    4888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 19:37:14.691340    4888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 19:37:14.703906    4888 docker.go:218] disabling cri-docker service (if available) ...
	I1126 19:37:14.703966    4888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 19:37:14.721107    4888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 19:37:14.738559    4888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 19:37:14.853514    4888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 19:37:14.981371    4888 docker.go:234] disabling docker service ...
	I1126 19:37:14.981476    4888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 19:37:15.002351    4888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 19:37:15.015278    4888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 19:37:15.138326    4888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 19:37:15.272598    4888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 19:37:15.285197    4888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 19:37:15.299364    4888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 19:37:15.299499    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.307883    4888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 19:37:15.307953    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.316089    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.324178    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.332437    4888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 19:37:15.339913    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.348150    4888 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.360471    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.369604    4888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 19:37:15.376487    4888 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1126 19:37:15.376569    4888 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1126 19:37:15.389889    4888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 19:37:15.397214    4888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:37:15.508203    4888 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 19:37:15.681179    4888 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 19:37:15.681258    4888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 19:37:15.684749    4888 start.go:564] Will wait 60s for crictl version
	I1126 19:37:15.684809    4888 ssh_runner.go:195] Run: which crictl
	I1126 19:37:15.688029    4888 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 19:37:15.711675    4888 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 19:37:15.711865    4888 ssh_runner.go:195] Run: crio --version
	I1126 19:37:15.739135    4888 ssh_runner.go:195] Run: crio --version
	I1126 19:37:15.770939    4888 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 19:37:15.773829    4888 cli_runner.go:164] Run: docker network inspect addons-152801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 19:37:15.790896    4888 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 19:37:15.794677    4888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 19:37:15.804137    4888 kubeadm.go:884] updating cluster {Name:addons-152801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-152801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 19:37:15.804266    4888 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:37:15.804324    4888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:37:15.835636    4888 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 19:37:15.835659    4888 crio.go:433] Images already preloaded, skipping extraction
	I1126 19:37:15.835711    4888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:37:15.860167    4888 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 19:37:15.860187    4888 cache_images.go:86] Images are preloaded, skipping loading
	I1126 19:37:15.860194    4888 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1126 19:37:15.860279    4888 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-152801 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-152801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 19:37:15.860354    4888 ssh_runner.go:195] Run: crio config
	I1126 19:37:15.919721    4888 cni.go:84] Creating CNI manager for ""
	I1126 19:37:15.919744    4888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:37:15.919760    4888 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 19:37:15.919782    4888 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-152801 NodeName:addons-152801 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 19:37:15.919901    4888 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-152801"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 19:37:15.919972    4888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 19:37:15.927274    4888 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 19:37:15.927384    4888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 19:37:15.934603    4888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1126 19:37:15.946719    4888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 19:37:15.959891    4888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1126 19:37:15.973373    4888 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1126 19:37:15.976889    4888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 19:37:15.985957    4888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:37:16.104046    4888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 19:37:16.121408    4888 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801 for IP: 192.168.49.2
	I1126 19:37:16.121471    4888 certs.go:195] generating shared ca certs ...
	I1126 19:37:16.121502    4888 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.121672    4888 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 19:37:16.336218    4888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt ...
	I1126 19:37:16.336253    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt: {Name:mk1b923187d4898357dbd217efb8f9b56f4fbed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.336456    4888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key ...
	I1126 19:37:16.336469    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key: {Name:mk0788bd3c53229948f8b98862d3eac560ece077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.336558    4888 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 19:37:16.796660    4888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt ...
	I1126 19:37:16.796694    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt: {Name:mkde51b7eb553204dc595950bd053b1cf1ad5c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.796926    4888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key ...
	I1126 19:37:16.796941    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key: {Name:mk07e2e19752c685127490fe5215034231ad2787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.797020    4888 certs.go:257] generating profile certs ...
	I1126 19:37:16.797083    4888 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.key
	I1126 19:37:16.797101    4888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt with IP's: []
	I1126 19:37:16.858565    4888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt ...
	I1126 19:37:16.858589    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: {Name:mk0325bae6f46d1e86b77469f940616a7bd8ec12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.858757    4888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.key ...
	I1126 19:37:16.858768    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.key: {Name:mk3a1a6a6babfaa19e586c3fd90f05ff1f5f860f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.858848    4888 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key.2818624a
	I1126 19:37:16.858871    4888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt.2818624a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1126 19:37:17.141299    4888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt.2818624a ...
	I1126 19:37:17.141327    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt.2818624a: {Name:mkd7e08b835ca007230c0f777379c969a78ac7ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:17.141515    4888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key.2818624a ...
	I1126 19:37:17.141532    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key.2818624a: {Name:mkc6d9c0146a117e15524692f93872472975ca75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:17.141614    4888 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt.2818624a -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt
	I1126 19:37:17.141696    4888 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key.2818624a -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key
	I1126 19:37:17.141751    4888 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.key
	I1126 19:37:17.141770    4888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.crt with IP's: []
	I1126 19:37:17.366071    4888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.crt ...
	I1126 19:37:17.366100    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.crt: {Name:mkcad830facb8aebfe64c6768d11d47b8b95fd38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:17.366271    4888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.key ...
	I1126 19:37:17.366283    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.key: {Name:mkf21b255d15ba02ea5b7a6b68ab2574110a3e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:17.366467    4888 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 19:37:17.366510    4888 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 19:37:17.366541    4888 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 19:37:17.366571    4888 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 19:37:17.367112    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 19:37:17.385441    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 19:37:17.403032    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 19:37:17.421075    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 19:37:17.438407    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 19:37:17.454859    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 19:37:17.471482    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 19:37:17.488199    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 19:37:17.504760    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 19:37:17.521628    4888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 19:37:17.534136    4888 ssh_runner.go:195] Run: openssl version
	I1126 19:37:17.540119    4888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 19:37:17.547979    4888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:37:17.551354    4888 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:37:17.551417    4888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:37:17.592059    4888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 19:37:17.600110    4888 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 19:37:17.603480    4888 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 19:37:17.603531    4888 kubeadm.go:401] StartCluster: {Name:addons-152801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-152801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:37:17.603615    4888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:17.603671    4888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:17.635728    4888 cri.go:89] found id: ""
	I1126 19:37:17.635793    4888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 19:37:17.643261    4888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 19:37:17.650576    4888 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 19:37:17.650641    4888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 19:37:17.658169    4888 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 19:37:17.658189    4888 kubeadm.go:158] found existing configuration files:
	
	I1126 19:37:17.658258    4888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 19:37:17.665505    4888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 19:37:17.665576    4888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 19:37:17.672672    4888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 19:37:17.680057    4888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 19:37:17.680118    4888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 19:37:17.686868    4888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 19:37:17.693834    4888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 19:37:17.693905    4888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 19:37:17.700747    4888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 19:37:17.707752    4888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 19:37:17.707822    4888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 19:37:17.714851    4888 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 19:37:17.766126    4888 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 19:37:17.766538    4888 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 19:37:17.790376    4888 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 19:37:17.790450    4888 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1126 19:37:17.790491    4888 kubeadm.go:319] OS: Linux
	I1126 19:37:17.790543    4888 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 19:37:17.790596    4888 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1126 19:37:17.790646    4888 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 19:37:17.790698    4888 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 19:37:17.790750    4888 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 19:37:17.790801    4888 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 19:37:17.790850    4888 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 19:37:17.790903    4888 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 19:37:17.790953    4888 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1126 19:37:17.860525    4888 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 19:37:17.860689    4888 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 19:37:17.860840    4888 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 19:37:17.869078    4888 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 19:37:17.876080    4888 out.go:252]   - Generating certificates and keys ...
	I1126 19:37:17.876180    4888 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 19:37:17.876252    4888 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 19:37:18.318969    4888 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 19:37:18.638932    4888 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 19:37:18.907767    4888 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 19:37:19.026106    4888 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 19:37:19.296349    4888 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 19:37:19.296717    4888 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-152801 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1126 19:37:19.814329    4888 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 19:37:19.814680    4888 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-152801 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1126 19:37:20.288255    4888 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 19:37:20.464183    4888 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 19:37:20.714280    4888 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 19:37:20.714352    4888 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 19:37:20.890864    4888 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 19:37:21.408788    4888 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 19:37:21.814596    4888 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 19:37:22.334456    4888 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 19:37:22.678326    4888 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 19:37:22.678867    4888 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 19:37:22.681428    4888 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 19:37:22.685377    4888 out.go:252]   - Booting up control plane ...
	I1126 19:37:22.685482    4888 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 19:37:22.685568    4888 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 19:37:22.685642    4888 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 19:37:22.699735    4888 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 19:37:22.699953    4888 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 19:37:22.707894    4888 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 19:37:22.708471    4888 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 19:37:22.708607    4888 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 19:37:22.842388    4888 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 19:37:22.842524    4888 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 19:37:24.342631    4888 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501920336s
	I1126 19:37:24.346154    4888 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 19:37:24.346252    4888 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1126 19:37:24.346506    4888 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 19:37:24.346597    4888 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 19:37:27.377271    4888 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.030615825s
	I1126 19:37:28.528883    4888 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.182731145s
	I1126 19:37:30.347606    4888 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001313133s
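The three `[control-plane-check]` lines above poll each component's health endpoint until it answers or the 4m0s budget runs out. A minimal sketch of that wait pattern (the endpoint, timeout, and helper name are illustrative, not minikube's actual implementation):

```shell
# Poll a command until it succeeds or a deadline (seconds) passes.
# Returns 0 on success, 1 if the deadline is hit first.
wait_healthy() {
  local deadline=$(( $(date +%s) + $1 ))
  shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}

# e.g. the kube-controller-manager check from the log, roughly:
#   wait_healthy 240 curl -fsSk https://127.0.0.1:10257/healthz
```

kubeadm reports the elapsed time on success (3.03s, 4.18s, 6.00s above), which is why the per-component durations differ even though all three probes start together.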
	I1126 19:37:30.366979    4888 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 19:37:30.382229    4888 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 19:37:30.396942    4888 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 19:37:30.397165    4888 kubeadm.go:319] [mark-control-plane] Marking the node addons-152801 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 19:37:30.409045    4888 kubeadm.go:319] [bootstrap-token] Using token: 9vmpoi.nosh8iympne0717j
	I1126 19:37:30.412163    4888 out.go:252]   - Configuring RBAC rules ...
	I1126 19:37:30.412293    4888 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 19:37:30.418167    4888 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 19:37:30.427501    4888 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 19:37:30.439590    4888 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 19:37:30.445405    4888 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 19:37:30.450693    4888 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 19:37:30.754483    4888 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 19:37:31.185033    4888 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 19:37:31.756620    4888 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 19:37:31.757674    4888 kubeadm.go:319] 
	I1126 19:37:31.757747    4888 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 19:37:31.757753    4888 kubeadm.go:319] 
	I1126 19:37:31.757830    4888 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 19:37:31.757837    4888 kubeadm.go:319] 
	I1126 19:37:31.757863    4888 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 19:37:31.757942    4888 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 19:37:31.757994    4888 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 19:37:31.757998    4888 kubeadm.go:319] 
	I1126 19:37:31.758052    4888 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 19:37:31.758056    4888 kubeadm.go:319] 
	I1126 19:37:31.758105    4888 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 19:37:31.758113    4888 kubeadm.go:319] 
	I1126 19:37:31.758165    4888 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 19:37:31.758240    4888 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 19:37:31.758308    4888 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 19:37:31.758312    4888 kubeadm.go:319] 
	I1126 19:37:31.758396    4888 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 19:37:31.758472    4888 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 19:37:31.758477    4888 kubeadm.go:319] 
	I1126 19:37:31.758560    4888 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9vmpoi.nosh8iympne0717j \
	I1126 19:37:31.758663    4888 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a \
	I1126 19:37:31.758683    4888 kubeadm.go:319] 	--control-plane 
	I1126 19:37:31.758687    4888 kubeadm.go:319] 
	I1126 19:37:31.758771    4888 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 19:37:31.758775    4888 kubeadm.go:319] 
	I1126 19:37:31.758857    4888 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9vmpoi.nosh8iympne0717j \
	I1126 19:37:31.758959    4888 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a 
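The `--discovery-token-ca-cert-hash` value in the join commands above is a SHA-256 digest of the cluster CA's public key. It can be recomputed from the CA certificate with the standard openssl pipeline that kubeadm documents (the cert path below is where minikube keeps its certs, illustrative here; the helper name is ours):

```shell
# Recompute the discovery-token CA cert hash from a CA certificate:
# extract the public key, DER-encode it, and take its SHA-256 digest.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print "sha256:" $NF}'
}

# e.g. ca_cert_hash /var/lib/minikube/certs/ca.crt
```

Joining nodes use this hash to pin the control plane's identity, so a token alone is not enough to bootstrap against the wrong cluster.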
	I1126 19:37:31.761445    4888 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1126 19:37:31.761690    4888 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1126 19:37:31.761821    4888 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 19:37:31.761854    4888 cni.go:84] Creating CNI manager for ""
	I1126 19:37:31.761862    4888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:37:31.766852    4888 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1126 19:37:31.770528    4888 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 19:37:31.774831    4888 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 19:37:31.774851    4888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 19:37:31.788486    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 19:37:32.071130    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:32.071229    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-152801 minikube.k8s.io/updated_at=2025_11_26T19_37_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=addons-152801 minikube.k8s.io/primary=true
	I1126 19:37:32.071281    4888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 19:37:32.221847    4888 ops.go:34] apiserver oom_adj: -16
	I1126 19:37:32.221993    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:32.722632    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:33.222387    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:33.722900    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:34.222963    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:34.722530    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:35.223023    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:35.722671    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:36.222628    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:36.722043    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:36.889307    4888 kubeadm.go:1114] duration metric: took 4.818233192s to wait for elevateKubeSystemPrivileges
	I1126 19:37:36.889334    4888 kubeadm.go:403] duration metric: took 19.285808303s to StartCluster
	I1126 19:37:36.889351    4888 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:36.889470    4888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 19:37:36.889793    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:36.889998    4888 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 19:37:36.890169    4888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 19:37:36.890305    4888 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1126 19:37:36.890401    4888 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:36.890405    4888 addons.go:70] Setting yakd=true in profile "addons-152801"
	I1126 19:37:36.890427    4888 addons.go:239] Setting addon yakd=true in "addons-152801"
	I1126 19:37:36.890435    4888 addons.go:70] Setting inspektor-gadget=true in profile "addons-152801"
	I1126 19:37:36.890445    4888 addons.go:239] Setting addon inspektor-gadget=true in "addons-152801"
	I1126 19:37:36.890451    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.890463    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.890915    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.890962    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.891405    4888 addons.go:70] Setting metrics-server=true in profile "addons-152801"
	I1126 19:37:36.891430    4888 addons.go:239] Setting addon metrics-server=true in "addons-152801"
	I1126 19:37:36.891454    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.891858    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.892055    4888 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-152801"
	I1126 19:37:36.892078    4888 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-152801"
	I1126 19:37:36.892100    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.892492    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.896068    4888 addons.go:70] Setting cloud-spanner=true in profile "addons-152801"
	I1126 19:37:36.896095    4888 addons.go:239] Setting addon cloud-spanner=true in "addons-152801"
	I1126 19:37:36.896124    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.896650    4888 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-152801"
	I1126 19:37:36.896697    4888 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-152801"
	I1126 19:37:36.896725    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.897131    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898229    4888 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-152801"
	I1126 19:37:36.898447    4888 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-152801"
	I1126 19:37:36.898497    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.898970    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.903038    4888 addons.go:70] Setting default-storageclass=true in profile "addons-152801"
	I1126 19:37:36.906366    4888 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-152801"
	I1126 19:37:36.906815    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898354    4888 addons.go:70] Setting registry-creds=true in profile "addons-152801"
	I1126 19:37:36.911379    4888 addons.go:239] Setting addon registry-creds=true in "addons-152801"
	I1126 19:37:36.911456    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.898361    4888 addons.go:70] Setting storage-provisioner=true in profile "addons-152801"
	I1126 19:37:36.927544    4888 addons.go:239] Setting addon storage-provisioner=true in "addons-152801"
	I1126 19:37:36.930062    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.930660    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.935209    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898368    4888 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-152801"
	I1126 19:37:36.948592    4888 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-152801"
	I1126 19:37:36.949035    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898374    4888 addons.go:70] Setting volcano=true in profile "addons-152801"
	I1126 19:37:36.990622    4888 addons.go:239] Setting addon volcano=true in "addons-152801"
	I1126 19:37:36.990666    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.991133    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898379    4888 addons.go:70] Setting volumesnapshots=true in profile "addons-152801"
	I1126 19:37:37.004611    4888 addons.go:239] Setting addon volumesnapshots=true in "addons-152801"
	I1126 19:37:37.004663    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.005127    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.007606    4888 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1126 19:37:37.011140    4888 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1126 19:37:37.011212    4888 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1126 19:37:37.011304    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:36.903054    4888 addons.go:70] Setting gcp-auth=true in profile "addons-152801"
	I1126 19:37:37.015505    4888 mustload.go:66] Loading cluster: addons-152801
	I1126 19:37:37.015704    4888 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:37.015975    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.903063    4888 addons.go:70] Setting ingress=true in profile "addons-152801"
	I1126 19:37:37.044761    4888 addons.go:239] Setting addon ingress=true in "addons-152801"
	I1126 19:37:37.044866    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.045465    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.061559    4888 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1126 19:37:37.063699    4888 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1126 19:37:37.065751    4888 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1126 19:37:37.065771    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1126 19:37:37.065864    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:36.903070    4888 addons.go:70] Setting ingress-dns=true in profile "addons-152801"
	I1126 19:37:37.067930    4888 addons.go:239] Setting addon ingress-dns=true in "addons-152801"
	I1126 19:37:37.067977    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.068439    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898343    4888 addons.go:70] Setting registry=true in profile "addons-152801"
	I1126 19:37:37.091209    4888 addons.go:239] Setting addon registry=true in "addons-152801"
	I1126 19:37:37.091249    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.091713    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.100722    4888 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1126 19:37:37.100742    4888 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1126 19:37:37.100811    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:36.905907    4888 out.go:179] * Verifying Kubernetes components...
	I1126 19:37:37.126106    4888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:37:36.927374    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.142534    4888 addons.go:239] Setting addon default-storageclass=true in "addons-152801"
	I1126 19:37:37.142571    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.143117    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.183125    4888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
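The long pipeline above rewrites the CoreDNS ConfigMap in place: the first sed expression injects a `hosts{}` block (mapping `host.minikube.internal` to the Docker gateway IP) before the `forward` directive, and the second enables query logging. A minimal reproduction against a sample Corefile (the Corefile contents are illustrative; only the sed expressions are taken from the log):

```shell
# Sample Corefile resembling the default CoreDNS config.
corefile='.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}'

# Apply the same two in-place edits the log performs on the ConfigMap.
patched=$(printf '%s\n' "$corefile" \
  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
        -e '/^        errors *$/i \        log')

printf '%s\n' "$patched"
```

The `fallthrough` keyword matters: without it, names not listed in the `hosts{}` block would get NXDOMAIN instead of falling through to the `forward` plugin.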
	I1126 19:37:37.183617    4888 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1126 19:37:37.194048    4888 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1126 19:37:37.194070    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1126 19:37:37.198923    4888 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1126 19:37:37.201252    4888 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 19:37:37.201759    4888 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1126 19:37:37.201774    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1126 19:37:37.201857    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.194130    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.194139    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1126 19:37:37.230653    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1126 19:37:37.233671    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1126 19:37:37.234808    4888 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1126 19:37:37.235453    4888 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:37:37.235466    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 19:37:37.235531    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.247513    4888 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1126 19:37:37.247532    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1126 19:37:37.247592    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	W1126 19:37:37.257632    4888 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1126 19:37:37.261858    4888 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-152801"
	I1126 19:37:37.261897    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.262318    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.282495    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.288050    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1126 19:37:37.290991    4888 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1126 19:37:37.293840    4888 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1126 19:37:37.293861    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1126 19:37:37.294034    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.294201    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1126 19:37:37.297472    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1126 19:37:37.300633    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.302274    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1126 19:37:37.325735    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1126 19:37:37.329290    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1126 19:37:37.332091    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1126 19:37:37.332114    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1126 19:37:37.332195    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.337081    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1126 19:37:37.337124    4888 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1126 19:37:37.337191    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.338068    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.348242    4888 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1126 19:37:37.352485    4888 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:37:37.352544    4888 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1126 19:37:37.352775    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1126 19:37:37.352848    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.366542    4888 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 19:37:37.366559    4888 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 19:37:37.366618    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.382896    4888 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:37:37.385906    4888 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1126 19:37:37.391483    4888 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1126 19:37:37.391505    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1126 19:37:37.391619    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.414592    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.415972    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.416913    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.429834    4888 out.go:179]   - Using image docker.io/registry:3.0.0
	I1126 19:37:37.433663    4888 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1126 19:37:37.442036    4888 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1126 19:37:37.442059    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1126 19:37:37.442137    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.451566    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.485744    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.506905    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.540993    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.546403    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.552103    4888 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1126 19:37:37.557322    4888 out.go:179]   - Using image docker.io/busybox:stable
	I1126 19:37:37.560495    4888 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1126 19:37:37.560522    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1126 19:37:37.560581    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.574738    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.576134    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.592711    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.603941    4888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 19:37:37.606384    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.632450    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:38.010191    4888 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1126 19:37:38.010211    4888 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1126 19:37:38.071693    4888 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1126 19:37:38.071716    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1126 19:37:38.122435    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:37:38.126569    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1126 19:37:38.199250    4888 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1126 19:37:38.199277    4888 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1126 19:37:38.207965    4888 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1126 19:37:38.207994    4888 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1126 19:37:38.212922    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1126 19:37:38.212949    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1126 19:37:38.218969    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1126 19:37:38.224303    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1126 19:37:38.225559    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1126 19:37:38.234907    4888 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1126 19:37:38.234948    4888 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1126 19:37:38.236518    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1126 19:37:38.237430    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1126 19:37:38.255756    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 19:37:38.266870    4888 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1126 19:37:38.266898    4888 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1126 19:37:38.268286    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1126 19:37:38.268306    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1126 19:37:38.293770    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1126 19:37:38.297313    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1126 19:37:38.298655    4888 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1126 19:37:38.298676    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1126 19:37:38.367508    4888 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1126 19:37:38.367538    4888 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1126 19:37:38.379397    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1126 19:37:38.379441    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1126 19:37:38.392964    4888 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1126 19:37:38.392991    4888 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1126 19:37:38.433173    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1126 19:37:38.466906    4888 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1126 19:37:38.466932    4888 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1126 19:37:38.531336    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1126 19:37:38.531377    4888 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1126 19:37:38.547877    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1126 19:37:38.547904    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1126 19:37:38.594638    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1126 19:37:38.607650    4888 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1126 19:37:38.607676    4888 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1126 19:37:38.628336    4888 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.445120824s)
	I1126 19:37:38.628374    4888 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1126 19:37:38.629486    4888 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.025516135s)
	I1126 19:37:38.630195    4888 node_ready.go:35] waiting up to 6m0s for node "addons-152801" to be "Ready" ...
	I1126 19:37:38.728407    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1126 19:37:38.728433    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1126 19:37:38.799156    4888 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1126 19:37:38.799186    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1126 19:37:38.825316    4888 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:37:38.825341    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1126 19:37:38.825804    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1126 19:37:38.875961    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1126 19:37:38.875986    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1126 19:37:39.008084    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:37:39.117378    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1126 19:37:39.117404    4888 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1126 19:37:39.132799    4888 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-152801" context rescaled to 1 replicas
	I1126 19:37:39.417001    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1126 19:37:39.417024    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1126 19:37:39.642407    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1126 19:37:39.642432    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1126 19:37:39.819138    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1126 19:37:39.819161    4888 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1126 19:37:39.967278    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1126 19:37:40.643200    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:41.175326    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.052816156s)
	I1126 19:37:41.175433    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.048800725s)
	I1126 19:37:41.175458    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.956466949s)
	I1126 19:37:41.984385    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.760046174s)
	I1126 19:37:41.984689    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.759097139s)
	I1126 19:37:41.984744    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.74820781s)
	I1126 19:37:41.984830    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.747348921s)
	I1126 19:37:41.984882    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.729103078s)
	W1126 19:37:42.064655    4888 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1126 19:37:42.294087    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.00027886s)
	I1126 19:37:42.881475    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.286807616s)
	I1126 19:37:42.881552    4888 addons.go:495] Verifying addon metrics-server=true in "addons-152801"
	I1126 19:37:42.881625    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.05579855s)
	I1126 19:37:42.881364    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.448150738s)
	I1126 19:37:42.881868    4888 addons.go:495] Verifying addon registry=true in "addons-152801"
	I1126 19:37:42.881999    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.873879021s)
	W1126 19:37:42.882035    4888 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1126 19:37:42.882078    4888 retry.go:31] will retry after 208.686382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1126 19:37:42.882210    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.584873996s)
	I1126 19:37:42.882222    4888 addons.go:495] Verifying addon ingress=true in "addons-152801"
	I1126 19:37:42.884952    4888 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-152801 service yakd-dashboard -n yakd-dashboard
	
	I1126 19:37:42.885009    4888 out.go:179] * Verifying registry addon...
	I1126 19:37:42.886993    4888 out.go:179] * Verifying ingress addon...
	I1126 19:37:42.887754    4888 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1126 19:37:42.890766    4888 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1126 19:37:42.895688    4888 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1126 19:37:42.895711    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:42.896038    4888 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1126 19:37:42.896058    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:43.091838    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.124469735s)
	I1126 19:37:43.091874    4888 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-152801"
	I1126 19:37:43.092132    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:37:43.094988    4888 out.go:179] * Verifying csi-hostpath-driver addon...
	I1126 19:37:43.097689    4888 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1126 19:37:43.109090    4888 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1126 19:37:43.109112    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:43.140121    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:43.391898    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:43.394353    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:43.601047    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:43.891325    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:43.893646    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:44.100794    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:44.390792    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:44.393160    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:44.601762    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:44.891281    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:44.893298    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:44.910292    4888 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1126 19:37:44.910386    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:44.927371    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:45.064921    4888 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1126 19:37:45.103873    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:45.107421    4888 addons.go:239] Setting addon gcp-auth=true in "addons-152801"
	I1126 19:37:45.107550    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:45.108131    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:45.139302    4888 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1126 19:37:45.139359    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	W1126 19:37:45.159508    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:45.208953    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:45.391266    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:45.393437    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:45.602303    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:45.788116    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.695948181s)
	I1126 19:37:45.791446    4888 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:37:45.794597    4888 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1126 19:37:45.797277    4888 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1126 19:37:45.797303    4888 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1126 19:37:45.810109    4888 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1126 19:37:45.810132    4888 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1126 19:37:45.822759    4888 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1126 19:37:45.822791    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1126 19:37:45.836003    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1126 19:37:45.891436    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:45.894013    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:46.101505    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:46.345863    4888 addons.go:495] Verifying addon gcp-auth=true in "addons-152801"
	I1126 19:37:46.348908    4888 out.go:179] * Verifying gcp-auth addon...
	I1126 19:37:46.352611    4888 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1126 19:37:46.358610    4888 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1126 19:37:46.358677    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:46.391501    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:46.393679    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:46.600807    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:46.856258    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:46.891096    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:46.893177    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:47.100840    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:47.355622    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:47.391433    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:47.393751    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:47.600730    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:47.633186    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:47.856214    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:47.891046    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:47.893329    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:48.101082    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:48.355501    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:48.391302    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:48.393339    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:48.600534    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:48.855805    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:48.890839    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:48.893207    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:49.101343    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:49.356074    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:49.390726    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:49.394533    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:49.600540    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:49.633511    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:49.855362    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:49.891008    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:49.893498    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:50.101942    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:50.356235    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:50.391089    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:50.393541    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:50.600483    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:50.856258    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:50.891266    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:50.893788    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:51.100696    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:51.355809    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:51.390577    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:51.393864    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:51.601618    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:51.856237    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:51.891189    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:51.893546    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:52.100977    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:52.137038    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:52.355607    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:52.391392    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:52.393583    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:52.600693    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:52.856195    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:52.890744    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:52.894169    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:53.101051    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:53.355215    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:53.391110    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:53.393314    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:53.601343    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:53.855328    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:53.891099    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:53.893103    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:54.101710    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:54.356237    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:54.390856    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:54.393124    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:54.601047    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:54.633702    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:54.855341    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:54.891376    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:54.893946    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:55.101350    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:55.355907    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:55.390828    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:55.393299    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:55.601225    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:55.856055    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:55.890937    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:55.893433    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:56.101561    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:56.356061    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:56.390595    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:56.394322    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:56.601467    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:56.855791    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:56.891553    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:56.893750    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:57.100828    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:57.133468    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:57.355315    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:57.391233    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:57.393078    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:57.601069    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:57.856238    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:57.890759    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:57.892824    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:58.101385    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:58.356099    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:58.390669    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:58.394146    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:58.601349    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:58.856249    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:58.891047    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:58.893358    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:59.101077    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:59.133797    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:59.355631    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:59.391521    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:59.393335    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:59.601701    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:59.856124    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:59.890933    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:59.893359    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:00.116665    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:00.358907    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:00.392366    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:00.396851    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:00.600418    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:00.856133    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:00.890965    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:00.893096    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:01.100974    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:01.134336    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:01.356260    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:01.391093    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:01.393170    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:01.601571    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:01.856306    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:01.891157    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:01.893316    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:02.101571    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:02.357024    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:02.390672    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:02.394387    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:02.601221    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:02.855743    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:02.891247    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:02.893241    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:03.101339    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:03.355564    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:03.391328    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:03.393617    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:03.600669    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:03.633432    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:03.856317    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:03.891268    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:03.893640    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:04.100668    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:04.355480    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:04.391310    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:04.393609    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:04.600313    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:04.855837    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:04.890841    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:04.893374    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:05.101501    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:05.355341    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:05.391175    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:05.393309    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:05.601252    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:05.855966    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:05.890724    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:05.894160    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:06.101248    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:06.133328    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:06.356127    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:06.392049    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:06.394629    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:06.601330    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:06.855757    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:06.890626    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:06.894420    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:07.100203    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:07.355613    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:07.391384    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:07.393864    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:07.600759    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:07.856104    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:07.891213    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:07.893771    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:08.100589    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:08.133375    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:08.356131    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:08.390878    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:08.393190    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:08.601190    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:08.855623    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:08.892835    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:08.894057    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:09.101036    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:09.355761    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:09.391385    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:09.393319    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:09.601393    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:09.856385    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:09.891031    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:09.893418    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:10.100944    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:10.133773    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:10.355801    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:10.391408    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:10.397784    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:10.600391    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:10.855909    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:10.890941    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:10.893059    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:11.100992    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:11.356453    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:11.391152    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:11.393914    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:11.600881    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:11.856417    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:11.891419    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:11.893304    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:12.101701    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:12.356298    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:12.391156    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:12.393184    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:12.601324    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:12.633306    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:12.856303    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:12.891100    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:12.893400    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:13.101433    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:13.356097    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:13.390661    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:13.394203    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:13.600302    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:13.855673    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:13.891393    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:13.893553    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:14.101116    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:14.355569    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:14.391234    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:14.393067    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:14.600924    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:14.633639    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:14.855471    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:14.891437    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:14.893611    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:15.100399    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:15.356272    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:15.390967    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:15.392949    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:15.600817    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:15.855412    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:15.891104    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:15.893515    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:16.100662    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:16.355812    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:16.391413    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:16.393328    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:16.601246    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:16.633801    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:16.855411    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:16.891256    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:16.893612    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:17.101444    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:17.356181    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:17.390947    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:17.393377    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:17.601269    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:17.866718    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:17.895100    4888 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1126 19:38:17.895124    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:17.896482    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:18.190559    4888 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1126 19:38:18.190579    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:18.206063    4888 node_ready.go:49] node "addons-152801" is "Ready"
	I1126 19:38:18.206094    4888 node_ready.go:38] duration metric: took 39.575876315s for node "addons-152801" to be "Ready" ...
	I1126 19:38:18.206107    4888 api_server.go:52] waiting for apiserver process to appear ...
	I1126 19:38:18.206165    4888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:38:18.234516    4888 api_server.go:72] duration metric: took 41.344489873s to wait for apiserver process to appear ...
	I1126 19:38:18.234542    4888 api_server.go:88] waiting for apiserver healthz status ...
	I1126 19:38:18.234560    4888 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1126 19:38:18.255587    4888 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1126 19:38:18.263045    4888 api_server.go:141] control plane version: v1.34.1
	I1126 19:38:18.263078    4888 api_server.go:131] duration metric: took 28.528343ms to wait for apiserver health ...
	I1126 19:38:18.263088    4888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 19:38:18.275914    4888 system_pods.go:59] 19 kube-system pods found
	I1126 19:38:18.275954    4888 system_pods.go:61] "coredns-66bc5c9577-qvl2j" [9a754e8d-4928-4fe6-bbec-70cd718917a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:38:18.275961    4888 system_pods.go:61] "csi-hostpath-attacher-0" [ac1eb361-e9a5-46e2-aeba-7fd26ad0e2bd] Pending
	I1126 19:38:18.275967    4888 system_pods.go:61] "csi-hostpath-resizer-0" [1f8b64ed-95d4-474c-b903-60b6c40d6fc0] Pending
	I1126 19:38:18.275975    4888 system_pods.go:61] "csi-hostpathplugin-bshhs" [6c2e8d62-8ef5-4353-8976-9aa7c3e0f667] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:38:18.275984    4888 system_pods.go:61] "etcd-addons-152801" [18fbdd46-010b-4707-85b2-c468ca37ee6c] Running
	I1126 19:38:18.275989    4888 system_pods.go:61] "kindnet-ktxmd" [3e962ef8-76b0-4926-8cfe-671cd851c299] Running
	I1126 19:38:18.275998    4888 system_pods.go:61] "kube-apiserver-addons-152801" [61829c4e-f463-4940-9286-74b1f325de9d] Running
	I1126 19:38:18.276003    4888 system_pods.go:61] "kube-controller-manager-addons-152801" [71a44491-0938-4f2e-8895-a2c85e1c1c56] Running
	I1126 19:38:18.276010    4888 system_pods.go:61] "kube-ingress-dns-minikube" [1c3c1c68-369f-46ff-9770-a948533ddb27] Pending
	I1126 19:38:18.276017    4888 system_pods.go:61] "kube-proxy-7gdlf" [6e73b61c-4615-4c17-af0c-68ce10097f82] Running
	I1126 19:38:18.276021    4888 system_pods.go:61] "kube-scheduler-addons-152801" [9704324b-4662-41c0-ac6d-1673805bc0f0] Running
	I1126 19:38:18.276029    4888 system_pods.go:61] "metrics-server-85b7d694d7-tjllr" [13565e4b-5a4b-448e-b984-dc03582b70dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:38:18.276035    4888 system_pods.go:61] "nvidia-device-plugin-daemonset-rrntc" [658d2994-5e58-41f4-b7ef-fbca089ee861] Pending
	I1126 19:38:18.276039    4888 system_pods.go:61] "registry-6b586f9694-scxrq" [bc7f6a37-ea49-4566-bd97-21f1047456d7] Pending
	I1126 19:38:18.276046    4888 system_pods.go:61] "registry-creds-764b6fb674-hcfnw" [41effe6d-c599-4e98-96a5-69d9638038ac] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:38:18.276064    4888 system_pods.go:61] "registry-proxy-sdxpt" [bf573c71-ee84-46f1-b932-717861ec5583] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:38:18.276071    4888 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gphz4" [5e3110a5-4385-46b3-9aed-c258ebfe891d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.276079    4888 system_pods.go:61] "snapshot-controller-7d9fbc56b8-whphz" [5f669982-2853-4426-a238-6566bc04539b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.276085    4888 system_pods.go:61] "storage-provisioner" [6f084c96-db5e-4615-85a4-046b50712af8] Pending
	I1126 19:38:18.276091    4888 system_pods.go:74] duration metric: took 12.998021ms to wait for pod list to return data ...
	I1126 19:38:18.276099    4888 default_sa.go:34] waiting for default service account to be created ...
	I1126 19:38:18.283748    4888 default_sa.go:45] found service account: "default"
	I1126 19:38:18.283774    4888 default_sa.go:55] duration metric: took 7.669435ms for default service account to be created ...
	I1126 19:38:18.283784    4888 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 19:38:18.287307    4888 system_pods.go:86] 19 kube-system pods found
	I1126 19:38:18.287336    4888 system_pods.go:89] "coredns-66bc5c9577-qvl2j" [9a754e8d-4928-4fe6-bbec-70cd718917a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:38:18.287342    4888 system_pods.go:89] "csi-hostpath-attacher-0" [ac1eb361-e9a5-46e2-aeba-7fd26ad0e2bd] Pending
	I1126 19:38:18.287348    4888 system_pods.go:89] "csi-hostpath-resizer-0" [1f8b64ed-95d4-474c-b903-60b6c40d6fc0] Pending
	I1126 19:38:18.287355    4888 system_pods.go:89] "csi-hostpathplugin-bshhs" [6c2e8d62-8ef5-4353-8976-9aa7c3e0f667] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:38:18.287359    4888 system_pods.go:89] "etcd-addons-152801" [18fbdd46-010b-4707-85b2-c468ca37ee6c] Running
	I1126 19:38:18.287364    4888 system_pods.go:89] "kindnet-ktxmd" [3e962ef8-76b0-4926-8cfe-671cd851c299] Running
	I1126 19:38:18.287374    4888 system_pods.go:89] "kube-apiserver-addons-152801" [61829c4e-f463-4940-9286-74b1f325de9d] Running
	I1126 19:38:18.287378    4888 system_pods.go:89] "kube-controller-manager-addons-152801" [71a44491-0938-4f2e-8895-a2c85e1c1c56] Running
	I1126 19:38:18.287387    4888 system_pods.go:89] "kube-ingress-dns-minikube" [1c3c1c68-369f-46ff-9770-a948533ddb27] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:38:18.287391    4888 system_pods.go:89] "kube-proxy-7gdlf" [6e73b61c-4615-4c17-af0c-68ce10097f82] Running
	I1126 19:38:18.287396    4888 system_pods.go:89] "kube-scheduler-addons-152801" [9704324b-4662-41c0-ac6d-1673805bc0f0] Running
	I1126 19:38:18.287402    4888 system_pods.go:89] "metrics-server-85b7d694d7-tjllr" [13565e4b-5a4b-448e-b984-dc03582b70dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:38:18.287415    4888 system_pods.go:89] "nvidia-device-plugin-daemonset-rrntc" [658d2994-5e58-41f4-b7ef-fbca089ee861] Pending
	I1126 19:38:18.287419    4888 system_pods.go:89] "registry-6b586f9694-scxrq" [bc7f6a37-ea49-4566-bd97-21f1047456d7] Pending
	I1126 19:38:18.287426    4888 system_pods.go:89] "registry-creds-764b6fb674-hcfnw" [41effe6d-c599-4e98-96a5-69d9638038ac] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:38:18.287436    4888 system_pods.go:89] "registry-proxy-sdxpt" [bf573c71-ee84-46f1-b932-717861ec5583] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:38:18.287444    4888 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gphz4" [5e3110a5-4385-46b3-9aed-c258ebfe891d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.287450    4888 system_pods.go:89] "snapshot-controller-7d9fbc56b8-whphz" [5f669982-2853-4426-a238-6566bc04539b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.287455    4888 system_pods.go:89] "storage-provisioner" [6f084c96-db5e-4615-85a4-046b50712af8] Pending
	I1126 19:38:18.287471    4888 retry.go:31] will retry after 215.290936ms: missing components: kube-dns
	I1126 19:38:18.361140    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:18.391330    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:18.393912    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:18.523237    4888 system_pods.go:86] 19 kube-system pods found
	I1126 19:38:18.523275    4888 system_pods.go:89] "coredns-66bc5c9577-qvl2j" [9a754e8d-4928-4fe6-bbec-70cd718917a6] Running
	I1126 19:38:18.523287    4888 system_pods.go:89] "csi-hostpath-attacher-0" [ac1eb361-e9a5-46e2-aeba-7fd26ad0e2bd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1126 19:38:18.523295    4888 system_pods.go:89] "csi-hostpath-resizer-0" [1f8b64ed-95d4-474c-b903-60b6c40d6fc0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:38:18.523303    4888 system_pods.go:89] "csi-hostpathplugin-bshhs" [6c2e8d62-8ef5-4353-8976-9aa7c3e0f667] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:38:18.523308    4888 system_pods.go:89] "etcd-addons-152801" [18fbdd46-010b-4707-85b2-c468ca37ee6c] Running
	I1126 19:38:18.523313    4888 system_pods.go:89] "kindnet-ktxmd" [3e962ef8-76b0-4926-8cfe-671cd851c299] Running
	I1126 19:38:18.523322    4888 system_pods.go:89] "kube-apiserver-addons-152801" [61829c4e-f463-4940-9286-74b1f325de9d] Running
	I1126 19:38:18.523326    4888 system_pods.go:89] "kube-controller-manager-addons-152801" [71a44491-0938-4f2e-8895-a2c85e1c1c56] Running
	I1126 19:38:18.523336    4888 system_pods.go:89] "kube-ingress-dns-minikube" [1c3c1c68-369f-46ff-9770-a948533ddb27] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:38:18.523340    4888 system_pods.go:89] "kube-proxy-7gdlf" [6e73b61c-4615-4c17-af0c-68ce10097f82] Running
	I1126 19:38:18.523352    4888 system_pods.go:89] "kube-scheduler-addons-152801" [9704324b-4662-41c0-ac6d-1673805bc0f0] Running
	I1126 19:38:18.523358    4888 system_pods.go:89] "metrics-server-85b7d694d7-tjllr" [13565e4b-5a4b-448e-b984-dc03582b70dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:38:18.523364    4888 system_pods.go:89] "nvidia-device-plugin-daemonset-rrntc" [658d2994-5e58-41f4-b7ef-fbca089ee861] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:38:18.523375    4888 system_pods.go:89] "registry-6b586f9694-scxrq" [bc7f6a37-ea49-4566-bd97-21f1047456d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:38:18.523382    4888 system_pods.go:89] "registry-creds-764b6fb674-hcfnw" [41effe6d-c599-4e98-96a5-69d9638038ac] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:38:18.523395    4888 system_pods.go:89] "registry-proxy-sdxpt" [bf573c71-ee84-46f1-b932-717861ec5583] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:38:18.523401    4888 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gphz4" [5e3110a5-4385-46b3-9aed-c258ebfe891d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.523408    4888 system_pods.go:89] "snapshot-controller-7d9fbc56b8-whphz" [5f669982-2853-4426-a238-6566bc04539b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.523416    4888 system_pods.go:89] "storage-provisioner" [6f084c96-db5e-4615-85a4-046b50712af8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:38:18.523424    4888 system_pods.go:126] duration metric: took 239.634166ms to wait for k8s-apps to be running ...
	I1126 19:38:18.523436    4888 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 19:38:18.523493    4888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:38:18.542176    4888 system_svc.go:56] duration metric: took 18.730951ms WaitForService to wait for kubelet
	I1126 19:38:18.542206    4888 kubeadm.go:587] duration metric: took 41.652183595s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:38:18.542224    4888 node_conditions.go:102] verifying NodePressure condition ...
	I1126 19:38:18.547730    4888 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 19:38:18.547764    4888 node_conditions.go:123] node cpu capacity is 2
	I1126 19:38:18.547777    4888 node_conditions.go:105] duration metric: took 5.548148ms to run NodePressure ...
	I1126 19:38:18.547791    4888 start.go:242] waiting for startup goroutines ...
	I1126 19:38:18.610513    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:18.857137    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:18.958953    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:18.959419    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:19.102409    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:19.356417    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:19.391508    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:19.394380    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:19.621298    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:19.856606    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:19.891908    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:19.894306    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:20.102393    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:20.359934    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:20.464481    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:20.465091    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:20.603000    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:20.856553    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:20.891908    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:20.895179    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:21.103023    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:21.356010    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:21.391289    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:21.394157    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:21.602061    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:21.856628    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:21.891631    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:21.893721    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:22.102707    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:22.358517    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:22.402377    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:22.459371    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:22.609871    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:22.859369    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:22.892711    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:22.896772    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:23.112512    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:23.356470    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:23.395097    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:23.395187    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:23.601365    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:23.856337    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:23.899701    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:23.901707    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:24.101953    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:24.356023    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:24.392619    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:24.397292    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:24.608557    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:24.862062    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:24.893335    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:24.897604    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:25.106664    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:25.360718    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:25.397358    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:25.397597    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:25.601952    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:25.860032    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:25.890891    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:25.901584    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:26.101083    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:26.355991    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:26.391043    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:26.394608    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:26.605820    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:26.855458    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:26.891208    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:26.893866    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:27.101278    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:27.356152    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:27.391398    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:27.394432    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:27.601563    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:27.855840    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:27.891782    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:27.894553    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:28.101905    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:28.357477    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:28.393403    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:28.394978    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:28.602134    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:28.856373    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:28.892361    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:28.894182    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:29.102362    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:29.356727    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:29.390646    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:29.394616    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:29.601084    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:29.856349    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:29.893040    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:29.895510    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:30.102633    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:30.356354    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:30.392223    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:30.395230    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:30.602239    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:30.856574    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:30.892716    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:30.895167    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:31.102448    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:31.356090    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:31.392249    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:31.393758    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:31.601540    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:31.855634    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:31.891832    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:31.894032    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:32.102400    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:32.356468    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:32.391837    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:32.394877    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:32.602638    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:32.857023    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:32.892366    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:32.895314    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:33.103195    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:33.357363    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:33.391805    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:33.394553    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:33.601676    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:33.856074    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:33.890819    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:33.893480    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:34.102293    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:34.356657    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:34.390680    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:34.394477    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:34.603172    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:34.856690    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:34.890872    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:34.893369    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:35.101744    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:35.356283    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:35.391695    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:35.394486    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:35.602006    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:35.856073    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:35.891680    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:35.894393    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:36.102337    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:36.356380    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:36.401198    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:36.402740    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:36.601189    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:36.857114    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:36.891413    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:36.900533    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:37.101317    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:37.356364    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:37.458267    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:37.458549    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:37.601608    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:37.855899    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:37.890938    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:37.893426    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:38.102379    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:38.355487    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:38.392442    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:38.394166    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:38.601463    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:38.855859    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:38.890515    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:38.894124    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:39.101582    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:39.356802    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:39.390849    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:39.393811    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:39.601086    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:39.856725    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:39.890927    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:39.893727    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:40.101787    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:40.356601    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:40.391484    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:40.393608    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:40.601733    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:40.855259    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:40.891385    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:40.893953    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:41.101308    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:41.356422    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:41.391704    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:41.394288    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:41.602593    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:41.856582    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:41.891701    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:41.893974    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:42.102693    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:42.355578    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:42.391758    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:42.394343    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:42.601907    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:42.856294    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:42.892334    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:42.893856    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:43.101382    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:43.356769    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:43.391496    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:43.394044    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:43.601545    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:43.855318    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:43.891110    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:43.894158    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:44.101880    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:44.356008    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:44.390950    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:44.393327    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:44.607813    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:44.856170    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:44.891167    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:44.893734    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:45.109338    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:45.356189    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:45.393299    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:45.394280    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:45.601497    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:45.855565    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:45.891952    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:45.894777    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:46.108628    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:46.355653    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:46.393541    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:46.395601    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:46.603395    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:46.855770    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:46.891871    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:46.894396    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:47.101507    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:47.356135    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:47.391116    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:47.393830    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:47.600576    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:47.858473    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:47.894552    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:47.896759    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:48.101599    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:48.356054    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:48.391645    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:48.394483    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:48.601541    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:48.855641    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:48.892170    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:48.895568    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:49.101957    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:49.355534    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:49.392669    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:49.393837    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:49.601190    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:49.856262    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:49.891074    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:49.893385    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:50.106786    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:50.356360    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:50.391189    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:50.393396    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:50.601282    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:50.856733    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:50.890820    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:50.893354    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:51.101392    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:51.355653    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:51.392202    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:51.394851    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:51.600766    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:51.855871    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:51.890904    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:51.893579    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:52.100908    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:52.355558    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:52.391573    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:52.393913    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:52.601418    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:52.856409    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:52.891461    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:52.899010    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:53.101790    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:53.356271    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:53.391151    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:53.393485    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:53.602122    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:53.856581    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:53.891595    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:53.893735    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:54.102730    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:54.357146    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:54.393790    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:54.395790    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:54.601973    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:54.855951    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:54.893585    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:54.900984    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:55.102239    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:55.363073    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:55.391485    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:55.394887    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:55.601126    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:55.856530    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:55.891443    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:55.893790    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:56.101205    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:56.356039    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:56.391029    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:56.393322    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:56.601779    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:56.856643    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:56.891378    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:56.893662    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:57.101677    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:57.356063    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:57.391109    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:57.393475    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:57.600653    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:57.856192    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:57.893278    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:57.895359    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:58.101422    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:58.356663    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:58.391845    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:58.394431    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:58.601400    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:58.855892    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:58.890840    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:58.893438    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:59.101686    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:59.356249    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:59.391041    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:59.393579    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:59.601430    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:59.856580    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:59.891578    4888 kapi.go:107] duration metric: took 1m17.003822008s to wait for kubernetes.io/minikube-addons=registry ...
	I1126 19:38:59.893843    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:00.125911    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:00.364885    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:00.417515    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:00.602321    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:00.856294    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:00.894947    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:01.101438    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:01.357391    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:01.395072    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:01.603336    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:01.858486    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:01.894644    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:02.101770    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:02.356840    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:02.393887    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:02.601519    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:02.855624    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:02.894536    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:03.101050    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:03.356469    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:03.394954    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:03.601335    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:03.856485    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:03.900398    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:04.101381    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:04.356630    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:04.395373    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:04.603551    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:04.856275    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:04.894335    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:05.101913    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:05.356158    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:05.394353    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:05.601517    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:05.855636    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:05.894894    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:06.107460    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:06.356942    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:06.394107    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:06.601838    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:06.855870    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:06.894003    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:07.102081    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:07.356896    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:07.394463    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:07.600977    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:07.856625    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:07.894926    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:08.101771    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:08.355943    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:08.394577    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:08.603457    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:08.855523    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:08.896139    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:09.101704    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:09.356042    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:09.394604    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:09.601445    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:09.855421    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:09.894743    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:10.101423    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:10.356384    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:10.394190    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:10.601086    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:10.855543    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:10.894981    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:11.107640    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:11.357954    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:11.459421    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:11.602617    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:11.875098    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:11.900425    4888 kapi.go:107] duration metric: took 1m29.009658765s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1126 19:39:12.103094    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:12.356232    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:12.601891    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:12.856002    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:13.101877    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:13.356267    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:13.602027    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:13.856966    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:14.103467    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:14.355834    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:14.603125    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:14.856106    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:15.102408    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:15.355734    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:15.601789    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:15.856010    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:16.101970    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:16.356643    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:16.601427    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:16.855472    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:17.101456    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:17.355959    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:17.601917    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:17.856552    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:18.100635    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:18.356080    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:18.601879    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:18.856067    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:19.103446    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:19.356773    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:19.602760    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:19.856538    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:20.101618    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:20.356284    4888 kapi.go:107] duration metric: took 1m34.003669321s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1126 19:39:20.360210    4888 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-152801 cluster.
	I1126 19:39:20.363359    4888 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1126 19:39:20.366527    4888 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1126 19:39:20.602154    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:21.101506    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:21.601486    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:22.112263    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:22.602279    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:23.101052    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:23.601795    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:24.104906    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:24.604316    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:25.102092    4888 kapi.go:107] duration metric: took 1m42.004399157s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1126 19:39:25.108920    4888 out.go:179] * Enabled addons: storage-provisioner, cloud-spanner, registry-creds, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, default-storageclass, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1126 19:39:25.112490    4888 addons.go:530] duration metric: took 1m48.222169314s for enable addons: enabled=[storage-provisioner cloud-spanner registry-creds ingress-dns nvidia-device-plugin amd-gpu-device-plugin default-storageclass inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1126 19:39:25.112548    4888 start.go:247] waiting for cluster config update ...
	I1126 19:39:25.112571    4888 start.go:256] writing updated cluster config ...
	I1126 19:39:25.112905    4888 ssh_runner.go:195] Run: rm -f paused
	I1126 19:39:25.117659    4888 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:39:25.121134    4888 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qvl2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.126321    4888 pod_ready.go:94] pod "coredns-66bc5c9577-qvl2j" is "Ready"
	I1126 19:39:25.126349    4888 pod_ready.go:86] duration metric: took 5.188236ms for pod "coredns-66bc5c9577-qvl2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.129799    4888 pod_ready.go:83] waiting for pod "etcd-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.135015    4888 pod_ready.go:94] pod "etcd-addons-152801" is "Ready"
	I1126 19:39:25.135041    4888 pod_ready.go:86] duration metric: took 5.215353ms for pod "etcd-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.137367    4888 pod_ready.go:83] waiting for pod "kube-apiserver-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.142495    4888 pod_ready.go:94] pod "kube-apiserver-addons-152801" is "Ready"
	I1126 19:39:25.142522    4888 pod_ready.go:86] duration metric: took 5.131588ms for pod "kube-apiserver-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.145395    4888 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.521829    4888 pod_ready.go:94] pod "kube-controller-manager-addons-152801" is "Ready"
	I1126 19:39:25.521862    4888 pod_ready.go:86] duration metric: took 376.439693ms for pod "kube-controller-manager-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.722370    4888 pod_ready.go:83] waiting for pod "kube-proxy-7gdlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:26.121828    4888 pod_ready.go:94] pod "kube-proxy-7gdlf" is "Ready"
	I1126 19:39:26.121857    4888 pod_ready.go:86] duration metric: took 399.458833ms for pod "kube-proxy-7gdlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:26.322401    4888 pod_ready.go:83] waiting for pod "kube-scheduler-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:26.722030    4888 pod_ready.go:94] pod "kube-scheduler-addons-152801" is "Ready"
	I1126 19:39:26.722059    4888 pod_ready.go:86] duration metric: took 399.634637ms for pod "kube-scheduler-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:26.722072    4888 pod_ready.go:40] duration metric: took 1.60437999s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:39:26.781521    4888 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 19:39:26.784695    4888 out.go:179] * Done! kubectl is now configured to use "addons-152801" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 19:42:32 addons-152801 crio[833]: time="2025-11-26T19:42:32.688133762Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=dd0d8c4f-496e-4213-b876-80008dc6bb0d name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:42:32 addons-152801 crio[833]: time="2025-11-26T19:42:32.690289459Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=b6d8c54c-62a4-45b5-8d0e-e725ad31d940 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:42:32 addons-152801 crio[833]: time="2025-11-26T19:42:32.692407125Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-hcfnw/registry-creds" id=77ad2da8-ae0d-450f-99e1-3b800764f8d9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:42:32 addons-152801 crio[833]: time="2025-11-26T19:42:32.692529077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:42:32 addons-152801 crio[833]: time="2025-11-26T19:42:32.714038612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:42:32 addons-152801 crio[833]: time="2025-11-26T19:42:32.715030266Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:42:32 addons-152801 crio[833]: time="2025-11-26T19:42:32.74456467Z" level=info msg="Created container 3ccc2bf452fcf4a099186381bc2ec95b763762e2edb192ead8e4b28ba945b4f7: kube-system/registry-creds-764b6fb674-hcfnw/registry-creds" id=77ad2da8-ae0d-450f-99e1-3b800764f8d9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:42:32 addons-152801 crio[833]: time="2025-11-26T19:42:32.745497748Z" level=info msg="Starting container: 3ccc2bf452fcf4a099186381bc2ec95b763762e2edb192ead8e4b28ba945b4f7" id=f83124bc-07c5-4299-bd8c-3dd0ad0cdb08 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 19:42:32 addons-152801 crio[833]: time="2025-11-26T19:42:32.747219554Z" level=info msg="Started container" PID=6982 containerID=3ccc2bf452fcf4a099186381bc2ec95b763762e2edb192ead8e4b28ba945b4f7 description=kube-system/registry-creds-764b6fb674-hcfnw/registry-creds id=f83124bc-07c5-4299-bd8c-3dd0ad0cdb08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8dcc87e9eecd30d6c5eb256dfc423b8d36a10798da59a9ba3a39444ce66d3435
	Nov 26 19:42:32 addons-152801 conmon[6980]: conmon 3ccc2bf452fcf4a09918 <ninfo>: container 6982 exited with status 1
	Nov 26 19:42:33 addons-152801 crio[833]: time="2025-11-26T19:42:33.69879965Z" level=info msg="Removing container: c34cdc06e61e3b5d7a8c2b0f92d5959b52e513b84d28bc48ef06ea403cbf668d" id=cdbd2b42-3454-40c3-b29f-da8d45e01dcb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 19:42:33 addons-152801 crio[833]: time="2025-11-26T19:42:33.713178118Z" level=info msg="Error loading conmon cgroup of container c34cdc06e61e3b5d7a8c2b0f92d5959b52e513b84d28bc48ef06ea403cbf668d: cgroup deleted" id=cdbd2b42-3454-40c3-b29f-da8d45e01dcb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 19:42:33 addons-152801 crio[833]: time="2025-11-26T19:42:33.722961757Z" level=info msg="Removed container c34cdc06e61e3b5d7a8c2b0f92d5959b52e513b84d28bc48ef06ea403cbf668d: kube-system/registry-creds-764b6fb674-hcfnw/registry-creds" id=cdbd2b42-3454-40c3-b29f-da8d45e01dcb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.502864861Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-rkwkf/POD" id=985fd532-6f2d-4a35-a14b-18b5d400b1c8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.502952688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.522618062Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-rkwkf Namespace:default ID:1459720fa39692958812d37d83d6a8d566cab84305ed26bbd50ca25e78a5d9da UID:06ced52e-2d9f-4fd0-96bc-5060409c01c5 NetNS:/var/run/netns/4bb87583-bbf6-40f4-8b07-1b0756287179 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40022464c8}] Aliases:map[]}"
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.522791402Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-rkwkf to CNI network \"kindnet\" (type=ptp)"
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.541823698Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-rkwkf Namespace:default ID:1459720fa39692958812d37d83d6a8d566cab84305ed26bbd50ca25e78a5d9da UID:06ced52e-2d9f-4fd0-96bc-5060409c01c5 NetNS:/var/run/netns/4bb87583-bbf6-40f4-8b07-1b0756287179 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40022464c8}] Aliases:map[]}"
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.542039015Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-rkwkf for CNI network kindnet (type=ptp)"
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.557810754Z" level=info msg="Ran pod sandbox 1459720fa39692958812d37d83d6a8d566cab84305ed26bbd50ca25e78a5d9da with infra container: default/hello-world-app-5d498dc89-rkwkf/POD" id=985fd532-6f2d-4a35-a14b-18b5d400b1c8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.562103056Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c674224a-d5f9-49af-a5ff-68e7a49c916b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.562247522Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=c674224a-d5f9-49af-a5ff-68e7a49c916b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.562286119Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=c674224a-d5f9-49af-a5ff-68e7a49c916b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.563047293Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=5956f608-9433-4eeb-98a3-59b61b4af00b name=/runtime.v1.ImageService/PullImage
	Nov 26 19:42:38 addons-152801 crio[833]: time="2025-11-26T19:42:38.565684414Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	3ccc2bf452fcf       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             7 seconds ago       Exited              registry-creds                           1                   8dcc87e9eecd3       registry-creds-764b6fb674-hcfnw            kube-system
	6f67913fc8b68       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago       Running             nginx                                    0                   b4d0297417d12       nginx                                      default
	fce906cf12d01       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago       Running             busybox                                  0                   18103862e9352       busybox                                    default
	5cdc59e655381       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago       Running             csi-snapshotter                          0                   e362139e28f18       csi-hostpathplugin-bshhs                   kube-system
	0d2525ad7c6f9       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   e362139e28f18       csi-hostpathplugin-bshhs                   kube-system
	68f9098f874c1       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   e362139e28f18       csi-hostpathplugin-bshhs                   kube-system
	c7b9d11300784       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   e362139e28f18       csi-hostpathplugin-bshhs                   kube-system
	c40c5e8f24aca       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago       Running             gcp-auth                                 0                   55d6b44dddf48       gcp-auth-78565c9fb4-fks2w                  gcp-auth
	5f7e0a69f6079       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago       Running             gadget                                   0                   cd509f2dd6065       gadget-vnrsj                               gadget
	a4e36f02d445a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   e362139e28f18       csi-hostpathplugin-bshhs                   kube-system
	7f1a0ce591f6c       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago       Running             controller                               0                   75ca2c5bd84b3       ingress-nginx-controller-6c8bf45fb-j7qhq   ingress-nginx
	333ebda1f94e9       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago       Running             csi-resizer                              0                   5bb0c2a6662cb       csi-hostpath-resizer-0                     kube-system
	e4aba6b77535f       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago       Running             cloud-spanner-emulator                   0                   7f8baf59ccf19       cloud-spanner-emulator-5bdddb765-chzvk     default
	be6e4f7ecbd7c       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago       Running             registry-proxy                           0                   35ca34017c282       registry-proxy-sdxpt                       kube-system
	357f60871c591       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   85311bf5645a2       snapshot-controller-7d9fbc56b8-whphz       kube-system
	6b2cce003afc3       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago       Running             yakd                                     0                   4110db22e84b3       yakd-dashboard-5ff678cb9-4wcfn             yakd-dashboard
	bbda721ec7889       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago       Running             csi-attacher                             0                   02502b7824730       csi-hostpath-attacher-0                    kube-system
	5aa817b9fa068       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   69f1d5b9dd084       nvidia-device-plugin-daemonset-rrntc       kube-system
	d4b8bdfa752c6       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago       Running             local-path-provisioner                   0                   0aa8ff336d827       local-path-provisioner-648f6765c9-gqgw2    local-path-storage
	33e2dbaa04cd8       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago       Running             csi-external-health-monitor-controller   0                   e362139e28f18       csi-hostpathplugin-bshhs                   kube-system
	2aecd6362c5e2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   4 minutes ago       Exited              patch                                    0                   a93d3e36814a3       ingress-nginx-admission-patch-xlj8c        ingress-nginx
	67ccc4b888832       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago       Running             registry                                 0                   98bfe50df195c       registry-6b586f9694-scxrq                  kube-system
	e3af750d29e79       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago       Running             volume-snapshot-controller               0                   50779a521dd2f       snapshot-controller-7d9fbc56b8-gphz4       kube-system
	3cd75fe86fc63       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago       Running             minikube-ingress-dns                     0                   39a1fa7f62fba       kube-ingress-dns-minikube                  kube-system
	3435418167dd8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   4 minutes ago       Exited              create                                   0                   3a1662169e2ef       ingress-nginx-admission-create-g8z27       ingress-nginx
	f900f636f3c4d       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago       Running             metrics-server                           0                   59bd798bb4e2a       metrics-server-85b7d694d7-tjllr            kube-system
	d0021ecd91f06       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   0b3bbfb2c610d       storage-provisioner                        kube-system
	2c15569036061       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago       Running             coredns                                  0                   edd4e41773c54       coredns-66bc5c9577-qvl2j                   kube-system
	4cfa09096b086       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago       Running             kindnet-cni                              0                   a20cd8059aa58       kindnet-ktxmd                              kube-system
	4f25a6570f326       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago       Running             kube-proxy                               0                   54a0245e9f072       kube-proxy-7gdlf                           kube-system
	4365cc22027bb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago       Running             etcd                                     0                   70cd354bee38a       etcd-addons-152801                         kube-system
	b21aa95449406       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago       Running             kube-apiserver                           0                   30f1a0eae29f4       kube-apiserver-addons-152801               kube-system
	899c0cef3d3c5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago       Running             kube-scheduler                           0                   9b6295b6b2ce1       kube-scheduler-addons-152801               kube-system
	6bd6a4e5eae30       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago       Running             kube-controller-manager                  0                   be7aadee1bb4b       kube-controller-manager-addons-152801      kube-system
	
	
	==> coredns [2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32] <==
	[INFO] 10.244.0.8:33810 - 42686 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002436694s
	[INFO] 10.244.0.8:33810 - 23777 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00011431s
	[INFO] 10.244.0.8:33810 - 9159 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00016473s
	[INFO] 10.244.0.8:51142 - 8440 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000169522s
	[INFO] 10.244.0.8:51142 - 8169 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000071579s
	[INFO] 10.244.0.8:56005 - 23420 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000916s
	[INFO] 10.244.0.8:56005 - 23174 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065015s
	[INFO] 10.244.0.8:34846 - 48615 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088868s
	[INFO] 10.244.0.8:34846 - 48170 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000166453s
	[INFO] 10.244.0.8:39118 - 4145 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001441218s
	[INFO] 10.244.0.8:39118 - 4565 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001594986s
	[INFO] 10.244.0.8:43309 - 61125 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00013433s
	[INFO] 10.244.0.8:43309 - 60722 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000084962s
	[INFO] 10.244.0.21:36407 - 18148 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000179565s
	[INFO] 10.244.0.21:51442 - 63332 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000122343s
	[INFO] 10.244.0.21:56301 - 48694 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091879s
	[INFO] 10.244.0.21:46758 - 35467 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096695s
	[INFO] 10.244.0.21:41552 - 55988 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099402s
	[INFO] 10.244.0.21:41942 - 34650 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081385s
	[INFO] 10.244.0.21:44469 - 27731 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002628408s
	[INFO] 10.244.0.21:58678 - 23954 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002422497s
	[INFO] 10.244.0.21:42423 - 13049 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002177219s
	[INFO] 10.244.0.21:49599 - 53468 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002421029s
	[INFO] 10.244.0.23:39879 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000254887s
	[INFO] 10.244.0.23:44129 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000136522s
	
	
	==> describe nodes <==
	Name:               addons-152801
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-152801
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=addons-152801
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T19_37_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-152801
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-152801"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:37:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-152801
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 19:42:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 19:42:37 +0000   Wed, 26 Nov 2025 19:37:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 19:42:37 +0000   Wed, 26 Nov 2025 19:37:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 19:42:37 +0000   Wed, 26 Nov 2025 19:37:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 19:42:37 +0000   Wed, 26 Nov 2025 19:38:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-152801
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                bca91ee9-088f-4b6e-9b97-43c6020effa7
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     cloud-spanner-emulator-5bdddb765-chzvk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  default                     hello-world-app-5d498dc89-rkwkf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-vnrsj                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  gcp-auth                    gcp-auth-78565c9fb4-fks2w                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-j7qhq    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m58s
	  kube-system                 coredns-66bc5c9577-qvl2j                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m4s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 csi-hostpathplugin-bshhs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 etcd-addons-152801                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m10s
	  kube-system                 kindnet-ktxmd                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m4s
	  kube-system                 kube-apiserver-addons-152801                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-addons-152801       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-7gdlf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-addons-152801                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 metrics-server-85b7d694d7-tjllr             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m59s
	  kube-system                 nvidia-device-plugin-daemonset-rrntc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 registry-6b586f9694-scxrq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 registry-creds-764b6fb674-hcfnw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 registry-proxy-sdxpt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 snapshot-controller-7d9fbc56b8-gphz4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 snapshot-controller-7d9fbc56b8-whphz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  local-path-storage          local-path-provisioner-648f6765c9-gqgw2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-4wcfn              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m2s                   kube-proxy       
	  Normal   Starting                 5m17s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m16s (x8 over 5m17s)  kubelet          Node addons-152801 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m16s (x8 over 5m17s)  kubelet          Node addons-152801 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m16s (x8 over 5m17s)  kubelet          Node addons-152801 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m9s                   kubelet          Node addons-152801 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s                   kubelet          Node addons-152801 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s                   kubelet          Node addons-152801 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m5s                   node-controller  Node addons-152801 event: Registered Node addons-152801 in Controller
	  Normal   NodeReady                4m23s                  kubelet          Node addons-152801 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov26 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014220] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507172] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032749] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.773464] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.697672] kauditd_printk_skb: 36 callbacks suppressed
	[Nov26 19:37] overlayfs: idmapped layers are currently not supported
	[  +0.074077] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov26 19:39] hrtimer: interrupt took 16123050 ns
	
	
	==> etcd [4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72] <==
	{"level":"warn","ts":"2025-11-26T19:37:27.394778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.422081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.425511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.442129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.458914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.477888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.493332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.518706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.538997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.550502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.567327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.584398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.602487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.618700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.640121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.660885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.676551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.705688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.758105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:43.411845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:43.433726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:38:05.438651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:38:05.454589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:38:05.481188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:38:05.497244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37394","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [c40c5e8f24acaac35ca06de5e702a8ee04f3e12e10d17eeddaf164cea9753db5] <==
	2025/11/26 19:39:19 GCP Auth Webhook started!
	2025/11/26 19:39:27 Ready to marshal response ...
	2025/11/26 19:39:27 Ready to write response ...
	2025/11/26 19:39:27 Ready to marshal response ...
	2025/11/26 19:39:27 Ready to write response ...
	2025/11/26 19:39:28 Ready to marshal response ...
	2025/11/26 19:39:28 Ready to write response ...
	2025/11/26 19:39:49 Ready to marshal response ...
	2025/11/26 19:39:49 Ready to write response ...
	2025/11/26 19:39:49 Ready to marshal response ...
	2025/11/26 19:39:49 Ready to write response ...
	2025/11/26 19:39:49 Ready to marshal response ...
	2025/11/26 19:39:49 Ready to write response ...
	2025/11/26 19:39:59 Ready to marshal response ...
	2025/11/26 19:39:59 Ready to write response ...
	2025/11/26 19:40:09 Ready to marshal response ...
	2025/11/26 19:40:09 Ready to write response ...
	2025/11/26 19:40:15 Ready to marshal response ...
	2025/11/26 19:40:15 Ready to write response ...
	2025/11/26 19:40:45 Ready to marshal response ...
	2025/11/26 19:40:45 Ready to write response ...
	2025/11/26 19:42:38 Ready to marshal response ...
	2025/11/26 19:42:38 Ready to write response ...
	
	
	==> kernel <==
	 19:42:40 up 24 min,  0 user,  load average: 0.68, 1.03, 0.56
	Linux addons-152801 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707] <==
	I1126 19:40:37.134383       1 main.go:301] handling current node
	I1126 19:40:47.134852       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:40:47.134885       1 main.go:301] handling current node
	I1126 19:40:57.135035       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:40:57.135064       1 main.go:301] handling current node
	I1126 19:41:07.138155       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:41:07.138190       1 main.go:301] handling current node
	I1126 19:41:17.134793       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:41:17.134830       1 main.go:301] handling current node
	I1126 19:41:27.134862       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:41:27.134895       1 main.go:301] handling current node
	I1126 19:41:37.142303       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:41:37.142418       1 main.go:301] handling current node
	I1126 19:41:47.140612       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:41:47.140714       1 main.go:301] handling current node
	I1126 19:41:57.134780       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:41:57.134815       1 main.go:301] handling current node
	I1126 19:42:07.142018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:42:07.142053       1 main.go:301] handling current node
	I1126 19:42:17.135970       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:42:17.136159       1 main.go:301] handling current node
	I1126 19:42:27.134840       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:42:27.134874       1 main.go:301] handling current node
	I1126 19:42:37.134442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:42:37.134651       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515] <==
	W1126 19:38:05.481173       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:38:05.496902       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:38:17.701561       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.245.89:443: connect: connection refused
	E1126 19:38:17.706676       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.245.89:443: connect: connection refused" logger="UnhandledError"
	W1126 19:38:17.707504       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.245.89:443: connect: connection refused
	E1126 19:38:17.707674       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.245.89:443: connect: connection refused" logger="UnhandledError"
	W1126 19:38:17.801241       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.245.89:443: connect: connection refused
	E1126 19:38:17.801283       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.245.89:443: connect: connection refused" logger="UnhandledError"
	E1126 19:38:34.655076       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.157.237:443: connect: connection refused" logger="UnhandledError"
	W1126 19:38:34.655248       1 handler_proxy.go:99] no RequestInfo found in the context
	E1126 19:38:34.655332       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1126 19:38:34.656452       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.157.237:443: connect: connection refused" logger="UnhandledError"
	E1126 19:38:34.661312       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.157.237:443: connect: connection refused" logger="UnhandledError"
	I1126 19:38:34.756765       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1126 19:39:36.288615       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58764: use of closed network connection
	E1126 19:39:36.526512       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58782: use of closed network connection
	E1126 19:39:36.670127       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58810: use of closed network connection
	I1126 19:40:15.171464       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1126 19:40:15.487379       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.173.6"}
	I1126 19:40:24.241630       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1126 19:40:26.249416       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1126 19:42:38.407619       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.45.229"}
	
	
	==> kube-controller-manager [6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b] <==
	I1126 19:37:35.473994       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 19:37:35.474103       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 19:37:35.474148       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 19:37:35.474220       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 19:37:35.474292       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 19:37:35.474507       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 19:37:35.476558       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 19:37:35.477812       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 19:37:35.477894       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 19:37:35.477948       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 19:37:35.479899       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 19:37:35.481956       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 19:37:35.482915       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 19:37:35.482990       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 19:37:35.483021       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 19:37:35.483050       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 19:37:35.511020       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-152801" podCIDRs=["10.244.0.0/24"]
	E1126 19:38:05.431245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1126 19:38:05.431413       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1126 19:38:05.431457       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1126 19:38:05.463765       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1126 19:38:05.474228       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1126 19:38:05.532638       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:38:05.575038       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 19:38:20.462105       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375] <==
	I1126 19:37:36.845279       1 server_linux.go:53] "Using iptables proxy"
	I1126 19:37:36.921712       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 19:37:37.022480       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 19:37:37.022556       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1126 19:37:37.022638       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 19:37:37.237448       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 19:37:37.273758       1 server_linux.go:132] "Using iptables Proxier"
	I1126 19:37:37.524151       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 19:37:37.547615       1 server.go:527] "Version info" version="v1.34.1"
	I1126 19:37:37.547649       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 19:37:37.589274       1 config.go:200] "Starting service config controller"
	I1126 19:37:37.589297       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 19:37:37.589460       1 config.go:106] "Starting endpoint slice config controller"
	I1126 19:37:37.589466       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 19:37:37.589550       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 19:37:37.589554       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 19:37:37.600462       1 config.go:309] "Starting node config controller"
	I1126 19:37:37.600485       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 19:37:37.600493       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 19:37:37.689803       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 19:37:37.689841       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 19:37:37.689882       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353] <==
	E1126 19:37:28.534607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 19:37:28.534641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:37:28.534696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 19:37:28.534730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 19:37:28.534763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 19:37:28.534796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 19:37:28.538811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:37:28.538992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:37:28.539073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 19:37:29.385458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 19:37:29.385458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 19:37:29.399025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 19:37:29.403829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 19:37:29.403842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 19:37:29.419365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 19:37:29.481522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 19:37:29.595155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 19:37:29.624446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 19:37:29.636121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 19:37:29.651869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:37:29.674867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 19:37:29.712319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 19:37:29.776478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:37:29.818237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1126 19:37:31.594505       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 19:40:54 addons-152801 kubelet[1253]: I1126 19:40:54.237878    1253 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0538f8c5-abd7-4885-8202-2c2775c6eb2c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^ccb0c2d5-caff-11f0-b93c-0ae92ad0c977\") on node \"addons-152801\" "
	Nov 26 19:40:54 addons-152801 kubelet[1253]: I1126 19:40:54.237963    1253 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tqk5t\" (UniqueName: \"kubernetes.io/projected/a0b730a5-84e4-42b5-b260-047bbffbdeba-kube-api-access-tqk5t\") on node \"addons-152801\" DevicePath \"\""
	Nov 26 19:40:54 addons-152801 kubelet[1253]: I1126 19:40:54.243412    1253 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-0538f8c5-abd7-4885-8202-2c2775c6eb2c" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^ccb0c2d5-caff-11f0-b93c-0ae92ad0c977") on node "addons-152801"
	Nov 26 19:40:54 addons-152801 kubelet[1253]: I1126 19:40:54.338934    1253 reconciler_common.go:299] "Volume detached for volume \"pvc-0538f8c5-abd7-4885-8202-2c2775c6eb2c\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^ccb0c2d5-caff-11f0-b93c-0ae92ad0c977\") on node \"addons-152801\" DevicePath \"\""
	Nov 26 19:40:54 addons-152801 kubelet[1253]: I1126 19:40:54.346282    1253 scope.go:117] "RemoveContainer" containerID="fe91884e74a6f55d2d1044a833d0c76afc028587aaff3a3059a0072c256ce1d5"
	Nov 26 19:40:54 addons-152801 kubelet[1253]: I1126 19:40:54.356413    1253 scope.go:117] "RemoveContainer" containerID="fe91884e74a6f55d2d1044a833d0c76afc028587aaff3a3059a0072c256ce1d5"
	Nov 26 19:40:54 addons-152801 kubelet[1253]: E1126 19:40:54.356875    1253 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe91884e74a6f55d2d1044a833d0c76afc028587aaff3a3059a0072c256ce1d5\": container with ID starting with fe91884e74a6f55d2d1044a833d0c76afc028587aaff3a3059a0072c256ce1d5 not found: ID does not exist" containerID="fe91884e74a6f55d2d1044a833d0c76afc028587aaff3a3059a0072c256ce1d5"
	Nov 26 19:40:54 addons-152801 kubelet[1253]: I1126 19:40:54.357012    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe91884e74a6f55d2d1044a833d0c76afc028587aaff3a3059a0072c256ce1d5"} err="failed to get container status \"fe91884e74a6f55d2d1044a833d0c76afc028587aaff3a3059a0072c256ce1d5\": rpc error: code = NotFound desc = could not find container \"fe91884e74a6f55d2d1044a833d0c76afc028587aaff3a3059a0072c256ce1d5\": container with ID starting with fe91884e74a6f55d2d1044a833d0c76afc028587aaff3a3059a0072c256ce1d5 not found: ID does not exist"
	Nov 26 19:40:55 addons-152801 kubelet[1253]: I1126 19:40:55.091921    1253 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b730a5-84e4-42b5-b260-047bbffbdeba" path="/var/lib/kubelet/pods/a0b730a5-84e4-42b5-b260-047bbffbdeba/volumes"
	Nov 26 19:41:07 addons-152801 kubelet[1253]: I1126 19:41:07.087205    1253 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-scxrq" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:41:16 addons-152801 kubelet[1253]: I1126 19:41:16.087795    1253 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rrntc" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:41:24 addons-152801 kubelet[1253]: I1126 19:41:24.087733    1253 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-sdxpt" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:42:23 addons-152801 kubelet[1253]: I1126 19:42:23.087890    1253 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-scxrq" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:42:28 addons-152801 kubelet[1253]: I1126 19:42:28.087937    1253 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hcfnw" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:42:31 addons-152801 kubelet[1253]: E1126 19:42:31.250701    1253 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f917c0fa92e3f0ee8b75303af6b2bfc57177dc8be7c4cfd7bcab24b1e383a80e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f917c0fa92e3f0ee8b75303af6b2bfc57177dc8be7c4cfd7bcab24b1e383a80e/diff: no such file or directory, extraDiskErr: <nil>
	Nov 26 19:42:32 addons-152801 kubelet[1253]: I1126 19:42:32.687128    1253 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hcfnw" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:42:32 addons-152801 kubelet[1253]: I1126 19:42:32.687190    1253 scope.go:117] "RemoveContainer" containerID="c34cdc06e61e3b5d7a8c2b0f92d5959b52e513b84d28bc48ef06ea403cbf668d"
	Nov 26 19:42:33 addons-152801 kubelet[1253]: I1126 19:42:33.692740    1253 scope.go:117] "RemoveContainer" containerID="c34cdc06e61e3b5d7a8c2b0f92d5959b52e513b84d28bc48ef06ea403cbf668d"
	Nov 26 19:42:33 addons-152801 kubelet[1253]: I1126 19:42:33.693102    1253 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hcfnw" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:42:33 addons-152801 kubelet[1253]: I1126 19:42:33.693139    1253 scope.go:117] "RemoveContainer" containerID="3ccc2bf452fcf4a099186381bc2ec95b763762e2edb192ead8e4b28ba945b4f7"
	Nov 26 19:42:33 addons-152801 kubelet[1253]: E1126 19:42:33.693307    1253 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-hcfnw_kube-system(41effe6d-c599-4e98-96a5-69d9638038ac)\"" pod="kube-system/registry-creds-764b6fb674-hcfnw" podUID="41effe6d-c599-4e98-96a5-69d9638038ac"
	Nov 26 19:42:35 addons-152801 kubelet[1253]: I1126 19:42:35.088111    1253 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rrntc" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:42:38 addons-152801 kubelet[1253]: I1126 19:42:38.228191    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/06ced52e-2d9f-4fd0-96bc-5060409c01c5-gcp-creds\") pod \"hello-world-app-5d498dc89-rkwkf\" (UID: \"06ced52e-2d9f-4fd0-96bc-5060409c01c5\") " pod="default/hello-world-app-5d498dc89-rkwkf"
	Nov 26 19:42:38 addons-152801 kubelet[1253]: I1126 19:42:38.228258    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfks6\" (UniqueName: \"kubernetes.io/projected/06ced52e-2d9f-4fd0-96bc-5060409c01c5-kube-api-access-zfks6\") pod \"hello-world-app-5d498dc89-rkwkf\" (UID: \"06ced52e-2d9f-4fd0-96bc-5060409c01c5\") " pod="default/hello-world-app-5d498dc89-rkwkf"
	Nov 26 19:42:38 addons-152801 kubelet[1253]: W1126 19:42:38.556542    1253 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae/crio-1459720fa39692958812d37d83d6a8d566cab84305ed26bbd50ca25e78a5d9da WatchSource:0}: Error finding container 1459720fa39692958812d37d83d6a8d566cab84305ed26bbd50ca25e78a5d9da: Status 404 returned error can't find the container with id 1459720fa39692958812d37d83d6a8d566cab84305ed26bbd50ca25e78a5d9da
	
	
	==> storage-provisioner [d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd] <==
	W1126 19:42:16.325150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:18.328023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:18.332222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:20.336749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:20.344470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:22.348123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:22.352466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:24.355609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:24.360020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:26.363509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:26.368083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:28.371451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:28.376055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:30.379743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:30.387515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:32.390111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:32.394907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:34.399479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:34.404179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:36.407351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:36.412273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:38.424587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:38.436481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:40.449682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:42:40.456378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-152801 -n addons-152801
helpers_test.go:269: (dbg) Run:  kubectl --context addons-152801 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-rkwkf ingress-nginx-admission-create-g8z27 ingress-nginx-admission-patch-xlj8c
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-152801 describe pod hello-world-app-5d498dc89-rkwkf ingress-nginx-admission-create-g8z27 ingress-nginx-admission-patch-xlj8c
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-152801 describe pod hello-world-app-5d498dc89-rkwkf ingress-nginx-admission-create-g8z27 ingress-nginx-admission-patch-xlj8c: exit status 1 (108.024052ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-rkwkf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-152801/192.168.49.2
	Start Time:       Wed, 26 Nov 2025 19:42:38 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zfks6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zfks6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-rkwkf to addons-152801
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g8z27" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xlj8c" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-152801 describe pod hello-world-app-5d498dc89-rkwkf ingress-nginx-admission-create-g8z27 ingress-nginx-admission-patch-xlj8c: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (324.691491ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:42:41.549612   14435 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:42:41.549818   14435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:42:41.549832   14435 out.go:374] Setting ErrFile to fd 2...
	I1126 19:42:41.549838   14435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:42:41.550344   14435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:42:41.550862   14435 mustload.go:66] Loading cluster: addons-152801
	I1126 19:42:41.551297   14435 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:42:41.551320   14435 addons.go:622] checking whether the cluster is paused
	I1126 19:42:41.551465   14435 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:42:41.551484   14435 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:42:41.552047   14435 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:42:41.586353   14435 ssh_runner.go:195] Run: systemctl --version
	I1126 19:42:41.586427   14435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:42:41.607463   14435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:42:41.730471   14435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:42:41.730573   14435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:42:41.778384   14435 cri.go:89] found id: "3ccc2bf452fcf4a099186381bc2ec95b763762e2edb192ead8e4b28ba945b4f7"
	I1126 19:42:41.778403   14435 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:42:41.778407   14435 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:42:41.778420   14435 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:42:41.778424   14435 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:42:41.778427   14435 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:42:41.778430   14435 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:42:41.778433   14435 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:42:41.778437   14435 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:42:41.778443   14435 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:42:41.778447   14435 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:42:41.778450   14435 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:42:41.778453   14435 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:42:41.778455   14435 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:42:41.778459   14435 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:42:41.778463   14435 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:42:41.778466   14435 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:42:41.778470   14435 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:42:41.778473   14435 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:42:41.778476   14435 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:42:41.778480   14435 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:42:41.778483   14435 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:42:41.778485   14435 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:42:41.778488   14435 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:42:41.778491   14435 cri.go:89] found id: ""
	I1126 19:42:41.778542   14435 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:42:41.803557   14435 out.go:203] 
	W1126 19:42:41.806565   14435 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:42:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:42:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:42:41.806591   14435 out.go:285] * 
	* 
	W1126 19:42:41.811345   14435 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:42:41.814278   14435 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable ingress --alsologtostderr -v=1: exit status 11 (296.982696ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:42:41.871078   14558 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:42:41.871241   14558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:42:41.871246   14558 out.go:374] Setting ErrFile to fd 2...
	I1126 19:42:41.871252   14558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:42:41.871503   14558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:42:41.871775   14558 mustload.go:66] Loading cluster: addons-152801
	I1126 19:42:41.872171   14558 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:42:41.872186   14558 addons.go:622] checking whether the cluster is paused
	I1126 19:42:41.872289   14558 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:42:41.872301   14558 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:42:41.872777   14558 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:42:41.890463   14558 ssh_runner.go:195] Run: systemctl --version
	I1126 19:42:41.890524   14558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:42:41.912262   14558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:42:42.039023   14558 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:42:42.039134   14558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:42:42.076736   14558 cri.go:89] found id: "3ccc2bf452fcf4a099186381bc2ec95b763762e2edb192ead8e4b28ba945b4f7"
	I1126 19:42:42.076757   14558 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:42:42.076762   14558 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:42:42.076766   14558 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:42:42.076770   14558 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:42:42.076774   14558 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:42:42.076778   14558 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:42:42.076781   14558 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:42:42.076785   14558 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:42:42.076792   14558 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:42:42.076795   14558 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:42:42.076799   14558 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:42:42.076802   14558 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:42:42.076805   14558 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:42:42.076808   14558 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:42:42.076813   14558 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:42:42.076817   14558 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:42:42.076822   14558 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:42:42.076826   14558 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:42:42.076830   14558 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:42:42.076835   14558 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:42:42.076838   14558 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:42:42.076841   14558 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:42:42.076844   14558 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:42:42.076847   14558 cri.go:89] found id: ""
	I1126 19:42:42.076901   14558 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:42:42.094136   14558 out.go:203] 
	W1126 19:42:42.097404   14558 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:42:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:42:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:42:42.097500   14558 out.go:285] * 
	* 
	W1126 19:42:42.103210   14558 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:42:42.107622   14558 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.32s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-vnrsj" [a6478616-5591-4afc-a2e9-9d98b46b5222] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003264585s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (245.877424ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:40:14.659171   12384 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:40:14.659323   12384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:14.659335   12384 out.go:374] Setting ErrFile to fd 2...
	I1126 19:40:14.659341   12384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:14.659711   12384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:40:14.660044   12384 mustload.go:66] Loading cluster: addons-152801
	I1126 19:40:14.660669   12384 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:14.660687   12384 addons.go:622] checking whether the cluster is paused
	I1126 19:40:14.660815   12384 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:14.660830   12384 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:40:14.661544   12384 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:40:14.680526   12384 ssh_runner.go:195] Run: systemctl --version
	I1126 19:40:14.680595   12384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:40:14.697519   12384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:40:14.800606   12384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:40:14.800696   12384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:40:14.830243   12384 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:40:14.830261   12384 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:40:14.830267   12384 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:40:14.830270   12384 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:40:14.830274   12384 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:40:14.830277   12384 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:40:14.830280   12384 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:40:14.830283   12384 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:40:14.830286   12384 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:40:14.830291   12384 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:40:14.830294   12384 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:40:14.830297   12384 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:40:14.830300   12384 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:40:14.830303   12384 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:40:14.830306   12384 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:40:14.830314   12384 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:40:14.830317   12384 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:40:14.830321   12384 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:40:14.830324   12384 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:40:14.830327   12384 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:40:14.830332   12384 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:40:14.830335   12384 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:40:14.830338   12384 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:40:14.830340   12384 cri.go:89] found id: ""
	I1126 19:40:14.830394   12384 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:40:14.845200   12384 out.go:203] 
	W1126 19:40:14.848114   12384 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:40:14.848135   12384 out.go:285] * 
	* 
	W1126 19:40:14.853132   12384 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:40:14.856026   12384 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.25s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.37s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.377649ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-tjllr" [13565e4b-5a4b-448e-b984-dc03582b70dc] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004251222s
addons_test.go:463: (dbg) Run:  kubectl --context addons-152801 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (262.935499ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:40:08.400081   12267 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:40:08.400331   12267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:08.400362   12267 out.go:374] Setting ErrFile to fd 2...
	I1126 19:40:08.400383   12267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:08.400673   12267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:40:08.401016   12267 mustload.go:66] Loading cluster: addons-152801
	I1126 19:40:08.401444   12267 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:08.401483   12267 addons.go:622] checking whether the cluster is paused
	I1126 19:40:08.401626   12267 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:08.401655   12267 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:40:08.402266   12267 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:40:08.421770   12267 ssh_runner.go:195] Run: systemctl --version
	I1126 19:40:08.421827   12267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:40:08.439326   12267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:40:08.544621   12267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:40:08.544711   12267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:40:08.574853   12267 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:40:08.574873   12267 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:40:08.574878   12267 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:40:08.574883   12267 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:40:08.574886   12267 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:40:08.574891   12267 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:40:08.574895   12267 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:40:08.574898   12267 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:40:08.574903   12267 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:40:08.574925   12267 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:40:08.574934   12267 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:40:08.574938   12267 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:40:08.574941   12267 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:40:08.574944   12267 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:40:08.574947   12267 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:40:08.574957   12267 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:40:08.574965   12267 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:40:08.574969   12267 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:40:08.574973   12267 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:40:08.574977   12267 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:40:08.574982   12267 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:40:08.574992   12267 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:40:08.574999   12267 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:40:08.575002   12267 cri.go:89] found id: ""
	I1126 19:40:08.575048   12267 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:40:08.591666   12267 out.go:203] 
	W1126 19:40:08.594520   12267 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:40:08.594540   12267 out.go:285] * 
	* 
	W1126 19:40:08.599384   12267 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:40:08.602323   12267 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)
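Every addon-disable failure in this run exits with `MK_ADDON_DISABLE_PAUSED` for the same reason: minikube's paused-state check shells out to `sudo runc list -f json`, and on this crio node the runc state directory `/run/runc` does not exist, so runc exits with status 1. A minimal sketch of that failure mode (the `state_dir` path below is a deliberately nonexistent stand-in, not the real node path):

```shell
#!/bin/sh
# Sketch of the check that fails above: `runc list` reads a state directory
# and exits non-zero when it is missing. We point at a directory that is
# never created, mirroring /run/runc being absent on the crio node.
state_dir="$(mktemp -d)/runc"

if [ -d "$state_dir" ]; then
    echo "state dir present; runc list would succeed"
else
    # Mirrors: level=error msg="open /run/runc: no such file or directory"
    echo "open $state_dir: no such file or directory" >&2
    echo "paused-state check fails"
fi
```

On a live node, running `ls -ld /run/runc` inside `minikube ssh` would distinguish a genuinely missing state directory from a permissions problem.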

TestAddons/parallel/CSI (54.77s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1126 19:40:00.582887    4129 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1126 19:40:00.590108    4129 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1126 19:40:00.590143    4129 kapi.go:107] duration metric: took 7.269819ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.282111ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-152801 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-152801 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [12ae902b-854c-4f06-9241-5b304fb519ae] Pending
helpers_test.go:352: "task-pv-pod" [12ae902b-854c-4f06-9241-5b304fb519ae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [12ae902b-854c-4f06-9241-5b304fb519ae] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.003772831s
addons_test.go:572: (dbg) Run:  kubectl --context addons-152801 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-152801 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-152801 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-152801 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-152801 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-152801 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-152801 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [a0b730a5-84e4-42b5-b260-047bbffbdeba] Pending
helpers_test.go:352: "task-pv-pod-restore" [a0b730a5-84e4-42b5-b260-047bbffbdeba] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [a0b730a5-84e4-42b5-b260-047bbffbdeba] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00280632s
addons_test.go:614: (dbg) Run:  kubectl --context addons-152801 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-152801 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-152801 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (270.472254ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:40:54.823919   13301 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:40:54.824120   13301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:54.824151   13301 out.go:374] Setting ErrFile to fd 2...
	I1126 19:40:54.824173   13301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:54.824464   13301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:40:54.824777   13301 mustload.go:66] Loading cluster: addons-152801
	I1126 19:40:54.825175   13301 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:54.825218   13301 addons.go:622] checking whether the cluster is paused
	I1126 19:40:54.825344   13301 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:54.825377   13301 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:40:54.825903   13301 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:40:54.845007   13301 ssh_runner.go:195] Run: systemctl --version
	I1126 19:40:54.845072   13301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:40:54.865331   13301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:40:54.972256   13301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:40:54.972340   13301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:40:55.002317   13301 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:40:55.002339   13301 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:40:55.002344   13301 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:40:55.002355   13301 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:40:55.002358   13301 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:40:55.002364   13301 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:40:55.002367   13301 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:40:55.002370   13301 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:40:55.002373   13301 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:40:55.002379   13301 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:40:55.002383   13301 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:40:55.002386   13301 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:40:55.002389   13301 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:40:55.002392   13301 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:40:55.002395   13301 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:40:55.002400   13301 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:40:55.002407   13301 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:40:55.002411   13301 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:40:55.002416   13301 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:40:55.002419   13301 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:40:55.002423   13301 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:40:55.002426   13301 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:40:55.002429   13301 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:40:55.002432   13301 cri.go:89] found id: ""
	I1126 19:40:55.002484   13301 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:40:55.016258   13301 out.go:203] 
	W1126 19:40:55.022538   13301 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:40:55.022579   13301 out.go:285] * 
	* 
	W1126 19:40:55.027964   13301 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:40:55.040894   13301 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (303.326035ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:40:55.123385   13343 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:40:55.123657   13343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:55.123691   13343 out.go:374] Setting ErrFile to fd 2...
	I1126 19:40:55.123711   13343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:55.124050   13343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:40:55.124520   13343 mustload.go:66] Loading cluster: addons-152801
	I1126 19:40:55.125030   13343 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:55.125081   13343 addons.go:622] checking whether the cluster is paused
	I1126 19:40:55.125248   13343 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:55.125283   13343 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:40:55.126251   13343 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:40:55.155797   13343 ssh_runner.go:195] Run: systemctl --version
	I1126 19:40:55.155881   13343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:40:55.178007   13343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:40:55.288588   13343 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:40:55.288666   13343 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:40:55.318596   13343 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:40:55.318620   13343 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:40:55.318630   13343 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:40:55.318636   13343 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:40:55.318640   13343 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:40:55.318647   13343 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:40:55.318651   13343 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:40:55.318654   13343 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:40:55.318657   13343 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:40:55.318663   13343 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:40:55.318670   13343 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:40:55.318673   13343 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:40:55.318680   13343 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:40:55.318683   13343 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:40:55.318687   13343 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:40:55.318692   13343 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:40:55.318698   13343 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:40:55.318701   13343 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:40:55.318704   13343 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:40:55.318712   13343 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:40:55.318717   13343 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:40:55.318720   13343 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:40:55.318723   13343 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:40:55.318726   13343 cri.go:89] found id: ""
	I1126 19:40:55.318779   13343 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:40:55.334536   13343 out.go:203] 
	W1126 19:40:55.337656   13343 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:40:55.337688   13343 out.go:285] * 
	* 
	W1126 19:40:55.342519   13343 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:40:55.345420   13343 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (54.77s)
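Note the contrast in the stderr dumps above: the `crictl ps -a --quiet --label ...` query succeeds and returns the full list of kube-system container IDs (the `cri.go:89` lines), while only the follow-up `runc list -f json` fails. A runtime-agnostic check could work from the IDs crictl already returned; a toy sketch of that counting step, using hypothetical IDs in place of the real crictl output:

```shell
#!/bin/sh
# Hypothetical container-ID stream, standing in for the IDs that
# `crictl ps -a --quiet` prints one per line in the logs above.
ids="5cdc59e65538 0d2525ad7c6f 68f9098f874c"

count=0
for id in $ids; do
    count=$((count + 1))
done
echo "found $count kube-system container ids"
```

This only illustrates the shape of a fallback; whether minikube could rely on crictl alone for the paused-state check is a question for the linked GitHub issue template, not something this log establishes.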

TestAddons/parallel/Headlamp (4.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-152801 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-152801 --alsologtostderr -v=1: exit status 11 (293.300405ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1126 19:39:58.850135   11511 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:39:58.850377   11511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:58.850406   11511 out.go:374] Setting ErrFile to fd 2...
	I1126 19:39:58.850425   11511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:58.850695   11511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:39:58.851003   11511 mustload.go:66] Loading cluster: addons-152801
	I1126 19:39:58.851407   11511 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:58.851443   11511 addons.go:622] checking whether the cluster is paused
	I1126 19:39:58.851586   11511 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:58.851616   11511 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:39:58.852158   11511 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:39:58.869555   11511 ssh_runner.go:195] Run: systemctl --version
	I1126 19:39:58.869610   11511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:39:58.888601   11511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:39:58.994060   11511 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:39:58.994140   11511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:39:59.047171   11511 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:39:59.047189   11511 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:39:59.047193   11511 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:39:59.047197   11511 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:39:59.047200   11511 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:39:59.047207   11511 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:39:59.047211   11511 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:39:59.047214   11511 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:39:59.047217   11511 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:39:59.047222   11511 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:39:59.047226   11511 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:39:59.047229   11511 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:39:59.047232   11511 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:39:59.047234   11511 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:39:59.047237   11511 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:39:59.047242   11511 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:39:59.047245   11511 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:39:59.047248   11511 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:39:59.047252   11511 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:39:59.047254   11511 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:39:59.047259   11511 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:39:59.047262   11511 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:39:59.047265   11511 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:39:59.047267   11511 cri.go:89] found id: ""
	I1126 19:39:59.047312   11511 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:39:59.074641   11511 out.go:203] 
	W1126 19:39:59.078536   11511 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:39:59.078557   11511 out.go:285] * 
	* 
	W1126 19:39:59.085737   11511 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:39:59.088775   11511 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-152801 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-152801
helpers_test.go:243: (dbg) docker inspect addons-152801:

-- stdout --
	[
	    {
	        "Id": "3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae",
	        "Created": "2025-11-26T19:37:09.20678067Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5287,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T19:37:09.272629667Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae/hosts",
	        "LogPath": "/var/lib/docker/containers/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae-json.log",
	        "Name": "/addons-152801",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-152801:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-152801",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae",
	                "LowerDir": "/var/lib/docker/overlay2/a388f63ff930544e473204efaaf20b3bd5bc52e2d648ced1b77967bf09bdd5bc-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a388f63ff930544e473204efaaf20b3bd5bc52e2d648ced1b77967bf09bdd5bc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a388f63ff930544e473204efaaf20b3bd5bc52e2d648ced1b77967bf09bdd5bc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a388f63ff930544e473204efaaf20b3bd5bc52e2d648ced1b77967bf09bdd5bc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-152801",
	                "Source": "/var/lib/docker/volumes/addons-152801/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-152801",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-152801",
	                "name.minikube.sigs.k8s.io": "addons-152801",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e584929d8dbb29efc932d6f088e2f19fb3e810e31669f8c94ce81e02c8703a76",
	            "SandboxKey": "/var/run/docker/netns/e584929d8dbb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-152801": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:06:28:d3:80:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "791264f8919751140113e621337947c00a2209ef659bb8a64a18b76705940d76",
	                    "EndpointID": "cc076c0fd6f8620c858df9b21ee74d7fe98ec959e15d20ce2fd4a668cba9060c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-152801",
	                        "3f8d1177ed55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-152801 -n addons-152801
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-152801 logs -n 25: (2.420646429s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-343127 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-343127   │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ delete  │ -p download-only-343127                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-343127   │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ start   │ -o=json --download-only -p download-only-163348 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-163348   │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ delete  │ -p download-only-163348                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-163348   │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ delete  │ -p download-only-343127                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-343127   │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ delete  │ -p download-only-163348                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-163348   │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ start   │ --download-only -p download-docker-938641 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-938641 │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │                     │
	│ delete  │ -p download-docker-938641                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-938641 │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ start   │ --download-only -p binary-mirror-453571 --alsologtostderr --binary-mirror http://127.0.0.1:34029 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-453571   │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │                     │
	│ delete  │ -p binary-mirror-453571                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-453571   │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ addons  │ enable dashboard -p addons-152801                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │                     │
	│ addons  │ disable dashboard -p addons-152801                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │                     │
	│ start   │ -p addons-152801 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:39 UTC │
	│ addons  │ addons-152801 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ addons  │ addons-152801 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ addons  │ addons-152801 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ addons  │ addons-152801 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ ip      │ addons-152801 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │ 26 Nov 25 19:39 UTC │
	│ addons  │ addons-152801 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ addons  │ addons-152801 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ addons  │ enable headlamp -p addons-152801 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │                     │
	│ ssh     │ addons-152801 ssh cat /opt/local-path-provisioner/pvc-6c7297e5-0e4c-403d-b89a-2e241166a087_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-152801          │ jenkins │ v1.37.0 │ 26 Nov 25 19:39 UTC │ 26 Nov 25 19:39 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:36:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:36:43.471931    4888 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:36:43.472045    4888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:36:43.472056    4888 out.go:374] Setting ErrFile to fd 2...
	I1126 19:36:43.472062    4888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:36:43.472303    4888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:36:43.472724    4888 out.go:368] Setting JSON to false
	I1126 19:36:43.473416    4888 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1134,"bootTime":1764184670,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 19:36:43.473479    4888 start.go:143] virtualization:  
	I1126 19:36:43.475110    4888 out.go:179] * [addons-152801] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 19:36:43.476472    4888 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:36:43.476563    4888 notify.go:221] Checking for updates...
	I1126 19:36:43.479166    4888 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:36:43.480543    4888 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 19:36:43.481717    4888 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 19:36:43.482826    4888 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 19:36:43.484055    4888 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:36:43.485460    4888 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:36:43.506176    4888 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 19:36:43.506308    4888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:36:43.568905    4888 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-26 19:36:43.559447899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:36:43.569006    4888 docker.go:319] overlay module found
	I1126 19:36:43.570400    4888 out.go:179] * Using the docker driver based on user configuration
	I1126 19:36:43.571643    4888 start.go:309] selected driver: docker
	I1126 19:36:43.571666    4888 start.go:927] validating driver "docker" against <nil>
	I1126 19:36:43.571679    4888 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:36:43.572421    4888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:36:43.622770    4888 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-26 19:36:43.614433972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:36:43.622928    4888 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 19:36:43.623140    4888 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:36:43.624633    4888 out.go:179] * Using Docker driver with root privileges
	I1126 19:36:43.625909    4888 cni.go:84] Creating CNI manager for ""
	I1126 19:36:43.626005    4888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:36:43.626013    4888 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 19:36:43.626091    4888 start.go:353] cluster config:
	{Name:addons-152801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-152801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:36:43.628484    4888 out.go:179] * Starting "addons-152801" primary control-plane node in "addons-152801" cluster
	I1126 19:36:43.629728    4888 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 19:36:43.631057    4888 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 19:36:43.632380    4888 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:36:43.632420    4888 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 19:36:43.632432    4888 cache.go:65] Caching tarball of preloaded images
	I1126 19:36:43.632452    4888 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 19:36:43.632524    4888 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 19:36:43.632535    4888 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 19:36:43.632884    4888 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/config.json ...
	I1126 19:36:43.632938    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/config.json: {Name:mk5d289ab55aa4f11a8101e03a097106e1da928c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:36:43.648105    4888 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1126 19:36:43.648225    4888 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1126 19:36:43.648249    4888 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1126 19:36:43.648255    4888 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1126 19:36:43.648262    4888 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1126 19:36:43.648267    4888 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from local cache
	I1126 19:37:01.543984    4888 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b from cached tarball
	I1126 19:37:01.544021    4888 cache.go:243] Successfully downloaded all kic artifacts
	I1126 19:37:01.544057    4888 start.go:360] acquireMachinesLock for addons-152801: {Name:mk24b9e69899438b99e9d16cbbe183077c32e652 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 19:37:01.544188    4888 start.go:364] duration metric: took 104.529µs to acquireMachinesLock for "addons-152801"
	I1126 19:37:01.544215    4888 start.go:93] Provisioning new machine with config: &{Name:addons-152801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-152801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 19:37:01.544301    4888 start.go:125] createHost starting for "" (driver="docker")
	I1126 19:37:01.547652    4888 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1126 19:37:01.547886    4888 start.go:159] libmachine.API.Create for "addons-152801" (driver="docker")
	I1126 19:37:01.547920    4888 client.go:173] LocalClient.Create starting
	I1126 19:37:01.548027    4888 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem
	I1126 19:37:01.891695    4888 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem
	I1126 19:37:02.208986    4888 cli_runner.go:164] Run: docker network inspect addons-152801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 19:37:02.226313    4888 cli_runner.go:211] docker network inspect addons-152801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 19:37:02.226398    4888 network_create.go:284] running [docker network inspect addons-152801] to gather additional debugging logs...
	I1126 19:37:02.226420    4888 cli_runner.go:164] Run: docker network inspect addons-152801
	W1126 19:37:02.241968    4888 cli_runner.go:211] docker network inspect addons-152801 returned with exit code 1
	I1126 19:37:02.241999    4888 network_create.go:287] error running [docker network inspect addons-152801]: docker network inspect addons-152801: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-152801 not found
	I1126 19:37:02.242013    4888 network_create.go:289] output of [docker network inspect addons-152801]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-152801 not found
	
	** /stderr **
	I1126 19:37:02.242146    4888 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 19:37:02.258644    4888 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b3f430}
	I1126 19:37:02.258691    4888 network_create.go:124] attempt to create docker network addons-152801 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1126 19:37:02.258793    4888 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-152801 addons-152801
	I1126 19:37:02.325280    4888 network_create.go:108] docker network addons-152801 192.168.49.0/24 created
	I1126 19:37:02.325311    4888 kic.go:121] calculated static IP "192.168.49.2" for the "addons-152801" container
	I1126 19:37:02.325390    4888 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 19:37:02.341875    4888 cli_runner.go:164] Run: docker volume create addons-152801 --label name.minikube.sigs.k8s.io=addons-152801 --label created_by.minikube.sigs.k8s.io=true
	I1126 19:37:02.360001    4888 oci.go:103] Successfully created a docker volume addons-152801
	I1126 19:37:02.360100    4888 cli_runner.go:164] Run: docker run --rm --name addons-152801-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-152801 --entrypoint /usr/bin/test -v addons-152801:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 19:37:04.665797    4888 cli_runner.go:217] Completed: docker run --rm --name addons-152801-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-152801 --entrypoint /usr/bin/test -v addons-152801:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib: (2.305657124s)
	I1126 19:37:04.665825    4888 oci.go:107] Successfully prepared a docker volume addons-152801
	I1126 19:37:04.665872    4888 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:37:04.665889    4888 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 19:37:04.665989    4888 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-152801:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 19:37:09.127055    4888 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-152801:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.461019169s)
	I1126 19:37:09.127089    4888 kic.go:203] duration metric: took 4.461197008s to extract preloaded images to volume ...
	W1126 19:37:09.127232    4888 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1126 19:37:09.127348    4888 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 19:37:09.192533    4888 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-152801 --name addons-152801 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-152801 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-152801 --network addons-152801 --ip 192.168.49.2 --volume addons-152801:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 19:37:09.523927    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Running}}
	I1126 19:37:09.549914    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:09.576481    4888 cli_runner.go:164] Run: docker exec addons-152801 stat /var/lib/dpkg/alternatives/iptables
	I1126 19:37:09.624082    4888 oci.go:144] the created container "addons-152801" has a running status.
	I1126 19:37:09.624122    4888 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa...
	I1126 19:37:09.906846    4888 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 19:37:09.939699    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:09.968877    4888 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 19:37:09.968898    4888 kic_runner.go:114] Args: [docker exec --privileged addons-152801 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 19:37:10.018076    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:10.036995    4888 machine.go:94] provisionDockerMachine start ...
	I1126 19:37:10.037093    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:10.055559    4888 main.go:143] libmachine: Using SSH client type: native
	I1126 19:37:10.055886    4888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:37:10.055901    4888 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 19:37:10.056661    4888 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1126 19:37:13.201186    4888 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-152801
	
	I1126 19:37:13.201209    4888 ubuntu.go:182] provisioning hostname "addons-152801"
	I1126 19:37:13.201271    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:13.218714    4888 main.go:143] libmachine: Using SSH client type: native
	I1126 19:37:13.219026    4888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:37:13.219046    4888 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-152801 && echo "addons-152801" | sudo tee /etc/hostname
	I1126 19:37:13.375843    4888 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-152801
	
	I1126 19:37:13.375947    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:13.394468    4888 main.go:143] libmachine: Using SSH client type: native
	I1126 19:37:13.394779    4888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:37:13.394801    4888 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-152801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-152801/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-152801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 19:37:13.542056    4888 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 19:37:13.542077    4888 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 19:37:13.542104    4888 ubuntu.go:190] setting up certificates
	I1126 19:37:13.542124    4888 provision.go:84] configureAuth start
	I1126 19:37:13.542180    4888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-152801
	I1126 19:37:13.558885    4888 provision.go:143] copyHostCerts
	I1126 19:37:13.558967    4888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 19:37:13.559087    4888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 19:37:13.559150    4888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 19:37:13.559224    4888 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.addons-152801 san=[127.0.0.1 192.168.49.2 addons-152801 localhost minikube]
	I1126 19:37:13.623176    4888 provision.go:177] copyRemoteCerts
	I1126 19:37:13.623240    4888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 19:37:13.623317    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:13.639937    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:13.745315    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 19:37:13.761902    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 19:37:13.780087    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 19:37:13.797027    4888 provision.go:87] duration metric: took 254.879005ms to configureAuth
	I1126 19:37:13.797052    4888 ubuntu.go:206] setting minikube options for container-runtime
	I1126 19:37:13.797236    4888 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:13.797346    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:13.814456    4888 main.go:143] libmachine: Using SSH client type: native
	I1126 19:37:13.814773    4888 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1126 19:37:13.814791    4888 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 19:37:14.113275    4888 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 19:37:14.113298    4888 machine.go:97] duration metric: took 4.076284458s to provisionDockerMachine
	I1126 19:37:14.113310    4888 client.go:176] duration metric: took 12.565383136s to LocalClient.Create
	I1126 19:37:14.113349    4888 start.go:167] duration metric: took 12.565464018s to libmachine.API.Create "addons-152801"
	I1126 19:37:14.113361    4888 start.go:293] postStartSetup for "addons-152801" (driver="docker")
	I1126 19:37:14.113371    4888 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 19:37:14.113449    4888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 19:37:14.113495    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:14.131658    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:14.238902    4888 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 19:37:14.242126    4888 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 19:37:14.242158    4888 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 19:37:14.242171    4888 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 19:37:14.242281    4888 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 19:37:14.242312    4888 start.go:296] duration metric: took 128.945546ms for postStartSetup
	I1126 19:37:14.242628    4888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-152801
	I1126 19:37:14.258948    4888 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/config.json ...
	I1126 19:37:14.259217    4888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 19:37:14.259264    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:14.275073    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:14.374768    4888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 19:37:14.379280    4888 start.go:128] duration metric: took 12.834962865s to createHost
	I1126 19:37:14.379307    4888 start.go:83] releasing machines lock for "addons-152801", held for 12.835109558s
	I1126 19:37:14.379379    4888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-152801
	I1126 19:37:14.395842    4888 ssh_runner.go:195] Run: cat /version.json
	I1126 19:37:14.395901    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:14.396157    4888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 19:37:14.396214    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:14.414912    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:14.414961    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:14.601635    4888 ssh_runner.go:195] Run: systemctl --version
	I1126 19:37:14.607928    4888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 19:37:14.641938    4888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 19:37:14.645980    4888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 19:37:14.646049    4888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 19:37:14.673506    4888 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1126 19:37:14.673533    4888 start.go:496] detecting cgroup driver to use...
	I1126 19:37:14.673563    4888 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 19:37:14.673613    4888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 19:37:14.691340    4888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 19:37:14.703906    4888 docker.go:218] disabling cri-docker service (if available) ...
	I1126 19:37:14.703966    4888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 19:37:14.721107    4888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 19:37:14.738559    4888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 19:37:14.853514    4888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 19:37:14.981371    4888 docker.go:234] disabling docker service ...
	I1126 19:37:14.981476    4888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 19:37:15.002351    4888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 19:37:15.015278    4888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 19:37:15.138326    4888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 19:37:15.272598    4888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 19:37:15.285197    4888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 19:37:15.299364    4888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 19:37:15.299499    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.307883    4888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 19:37:15.307953    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.316089    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.324178    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.332437    4888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 19:37:15.339913    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.348150    4888 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:37:15.360471    4888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
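The run of `sed` commands above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: pause image, cgroup driver, conmon cgroup, and the `default_sysctls` block. The same edits can be replayed on a scratch copy; the starting file contents here are illustrative, only the substitutions mirror the log:

```shell
# Replay of the cri-o config edits above against a scratch copy of
# 02-crio.conf (contents illustrative; GNU sed assumed).
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# point cri-o at the desired pause image
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
# switch the cgroup driver to cgroupfs, as detected on the host
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
# drop any existing conmon_cgroup line, then re-add it after cgroup_manager
sed -i '/conmon_cgroup = .*/d' "$CONF"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
# ensure a default_sysctls block exists, then add the unprivileged-port entry
grep -q '^ *default_sysctls' "$CONF" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
```

Each edit is idempotent (delete-then-append, or guarded by a `grep -q`), which is why minikube can safely rerun the whole sequence on every start.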
	I1126 19:37:15.369604    4888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 19:37:15.376487    4888 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1126 19:37:15.376569    4888 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1126 19:37:15.389889    4888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 19:37:15.397214    4888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:37:15.508203    4888 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 19:37:15.681179    4888 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 19:37:15.681258    4888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 19:37:15.684749    4888 start.go:564] Will wait 60s for crictl version
	I1126 19:37:15.684809    4888 ssh_runner.go:195] Run: which crictl
	I1126 19:37:15.688029    4888 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 19:37:15.711675    4888 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 19:37:15.711865    4888 ssh_runner.go:195] Run: crio --version
	I1126 19:37:15.739135    4888 ssh_runner.go:195] Run: crio --version
	I1126 19:37:15.770939    4888 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 19:37:15.773829    4888 cli_runner.go:164] Run: docker network inspect addons-152801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 19:37:15.790896    4888 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 19:37:15.794677    4888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 19:37:15.804137    4888 kubeadm.go:884] updating cluster {Name:addons-152801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-152801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 19:37:15.804266    4888 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:37:15.804324    4888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:37:15.835636    4888 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 19:37:15.835659    4888 crio.go:433] Images already preloaded, skipping extraction
	I1126 19:37:15.835711    4888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:37:15.860167    4888 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 19:37:15.860187    4888 cache_images.go:86] Images are preloaded, skipping loading
	I1126 19:37:15.860194    4888 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1126 19:37:15.860279    4888 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-152801 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-152801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 19:37:15.860354    4888 ssh_runner.go:195] Run: crio config
	I1126 19:37:15.919721    4888 cni.go:84] Creating CNI manager for ""
	I1126 19:37:15.919744    4888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:37:15.919760    4888 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 19:37:15.919782    4888 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-152801 NodeName:addons-152801 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 19:37:15.919901    4888 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-152801"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
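The config printed above is one multi-document YAML file combining four kinds (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which minikube then writes out as `kubeadm.yaml.new`. A quick sanity check over such a file can be sketched as follows; the miniature config here is illustrative, only the four kinds are taken from the log:

```shell
# Check that a kubeadm config carries all four expected document kinds.
# The file contents are a stand-in for the full config shown above.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

for kind in InitConfiguration ClusterConfiguration \
            KubeletConfiguration KubeProxyConfiguration; do
    grep -q "^kind: $kind$" "$CFG" || { echo "missing $kind"; exit 1; }
done
echo "all four kinds present"
```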
	I1126 19:37:15.919972    4888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 19:37:15.927274    4888 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 19:37:15.927384    4888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 19:37:15.934603    4888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1126 19:37:15.946719    4888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 19:37:15.959891    4888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1126 19:37:15.973373    4888 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1126 19:37:15.976889    4888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 19:37:15.985957    4888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:37:16.104046    4888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 19:37:16.121408    4888 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801 for IP: 192.168.49.2
	I1126 19:37:16.121471    4888 certs.go:195] generating shared ca certs ...
	I1126 19:37:16.121502    4888 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.121672    4888 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 19:37:16.336218    4888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt ...
	I1126 19:37:16.336253    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt: {Name:mk1b923187d4898357dbd217efb8f9b56f4fbed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.336456    4888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key ...
	I1126 19:37:16.336469    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key: {Name:mk0788bd3c53229948f8b98862d3eac560ece077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.336558    4888 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 19:37:16.796660    4888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt ...
	I1126 19:37:16.796694    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt: {Name:mkde51b7eb553204dc595950bd053b1cf1ad5c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.796926    4888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key ...
	I1126 19:37:16.796941    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key: {Name:mk07e2e19752c685127490fe5215034231ad2787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.797020    4888 certs.go:257] generating profile certs ...
	I1126 19:37:16.797083    4888 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.key
	I1126 19:37:16.797101    4888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt with IP's: []
	I1126 19:37:16.858565    4888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt ...
	I1126 19:37:16.858589    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: {Name:mk0325bae6f46d1e86b77469f940616a7bd8ec12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.858757    4888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.key ...
	I1126 19:37:16.858768    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.key: {Name:mk3a1a6a6babfaa19e586c3fd90f05ff1f5f860f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:16.858848    4888 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key.2818624a
	I1126 19:37:16.858871    4888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt.2818624a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1126 19:37:17.141299    4888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt.2818624a ...
	I1126 19:37:17.141327    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt.2818624a: {Name:mkd7e08b835ca007230c0f777379c969a78ac7ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:17.141515    4888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key.2818624a ...
	I1126 19:37:17.141532    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key.2818624a: {Name:mkc6d9c0146a117e15524692f93872472975ca75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:17.141614    4888 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt.2818624a -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt
	I1126 19:37:17.141696    4888 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key.2818624a -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key
	I1126 19:37:17.141751    4888 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.key
	I1126 19:37:17.141770    4888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.crt with IP's: []
	I1126 19:37:17.366071    4888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.crt ...
	I1126 19:37:17.366100    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.crt: {Name:mkcad830facb8aebfe64c6768d11d47b8b95fd38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:17.366271    4888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.key ...
	I1126 19:37:17.366283    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.key: {Name:mkf21b255d15ba02ea5b7a6b68ab2574110a3e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:17.366467    4888 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 19:37:17.366510    4888 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 19:37:17.366541    4888 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 19:37:17.366571    4888 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 19:37:17.367112    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 19:37:17.385441    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 19:37:17.403032    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 19:37:17.421075    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 19:37:17.438407    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 19:37:17.454859    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 19:37:17.471482    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 19:37:17.488199    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 19:37:17.504760    4888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 19:37:17.521628    4888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 19:37:17.534136    4888 ssh_runner.go:195] Run: openssl version
	I1126 19:37:17.540119    4888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 19:37:17.547979    4888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:37:17.551354    4888 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:37:17.551417    4888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:37:17.592059    4888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 19:37:17.600110    4888 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 19:37:17.603480    4888 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 19:37:17.603531    4888 kubeadm.go:401] StartCluster: {Name:addons-152801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-152801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:37:17.603615    4888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:37:17.603671    4888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:37:17.635728    4888 cri.go:89] found id: ""
	I1126 19:37:17.635793    4888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 19:37:17.643261    4888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 19:37:17.650576    4888 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 19:37:17.650641    4888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 19:37:17.658169    4888 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 19:37:17.658189    4888 kubeadm.go:158] found existing configuration files:
	
	I1126 19:37:17.658258    4888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 19:37:17.665505    4888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 19:37:17.665576    4888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 19:37:17.672672    4888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 19:37:17.680057    4888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 19:37:17.680118    4888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 19:37:17.686868    4888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 19:37:17.693834    4888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 19:37:17.693905    4888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 19:37:17.700747    4888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 19:37:17.707752    4888 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 19:37:17.707822    4888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 19:37:17.714851    4888 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 19:37:17.766126    4888 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 19:37:17.766538    4888 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 19:37:17.790376    4888 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 19:37:17.790450    4888 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1126 19:37:17.790491    4888 kubeadm.go:319] OS: Linux
	I1126 19:37:17.790543    4888 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 19:37:17.790596    4888 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1126 19:37:17.790646    4888 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 19:37:17.790698    4888 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 19:37:17.790750    4888 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 19:37:17.790801    4888 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 19:37:17.790850    4888 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 19:37:17.790903    4888 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 19:37:17.790953    4888 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1126 19:37:17.860525    4888 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 19:37:17.860689    4888 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 19:37:17.860840    4888 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 19:37:17.869078    4888 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 19:37:17.876080    4888 out.go:252]   - Generating certificates and keys ...
	I1126 19:37:17.876180    4888 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 19:37:17.876252    4888 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 19:37:18.318969    4888 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 19:37:18.638932    4888 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 19:37:18.907767    4888 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 19:37:19.026106    4888 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 19:37:19.296349    4888 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 19:37:19.296717    4888 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-152801 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1126 19:37:19.814329    4888 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 19:37:19.814680    4888 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-152801 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1126 19:37:20.288255    4888 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 19:37:20.464183    4888 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 19:37:20.714280    4888 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 19:37:20.714352    4888 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 19:37:20.890864    4888 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 19:37:21.408788    4888 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 19:37:21.814596    4888 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 19:37:22.334456    4888 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 19:37:22.678326    4888 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 19:37:22.678867    4888 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 19:37:22.681428    4888 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 19:37:22.685377    4888 out.go:252]   - Booting up control plane ...
	I1126 19:37:22.685482    4888 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 19:37:22.685568    4888 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 19:37:22.685642    4888 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 19:37:22.699735    4888 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 19:37:22.699953    4888 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 19:37:22.707894    4888 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 19:37:22.708471    4888 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 19:37:22.708607    4888 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 19:37:22.842388    4888 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 19:37:22.842524    4888 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 19:37:24.342631    4888 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501920336s
	I1126 19:37:24.346154    4888 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 19:37:24.346252    4888 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1126 19:37:24.346506    4888 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 19:37:24.346597    4888 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 19:37:27.377271    4888 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.030615825s
	I1126 19:37:28.528883    4888 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.182731145s
	I1126 19:37:30.347606    4888 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001313133s
	I1126 19:37:30.366979    4888 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 19:37:30.382229    4888 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 19:37:30.396942    4888 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 19:37:30.397165    4888 kubeadm.go:319] [mark-control-plane] Marking the node addons-152801 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 19:37:30.409045    4888 kubeadm.go:319] [bootstrap-token] Using token: 9vmpoi.nosh8iympne0717j
	I1126 19:37:30.412163    4888 out.go:252]   - Configuring RBAC rules ...
	I1126 19:37:30.412293    4888 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 19:37:30.418167    4888 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 19:37:30.427501    4888 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 19:37:30.439590    4888 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 19:37:30.445405    4888 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 19:37:30.450693    4888 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 19:37:30.754483    4888 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 19:37:31.185033    4888 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 19:37:31.756620    4888 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 19:37:31.757674    4888 kubeadm.go:319] 
	I1126 19:37:31.757747    4888 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 19:37:31.757753    4888 kubeadm.go:319] 
	I1126 19:37:31.757830    4888 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 19:37:31.757837    4888 kubeadm.go:319] 
	I1126 19:37:31.757863    4888 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 19:37:31.757942    4888 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 19:37:31.757994    4888 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 19:37:31.757998    4888 kubeadm.go:319] 
	I1126 19:37:31.758052    4888 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 19:37:31.758056    4888 kubeadm.go:319] 
	I1126 19:37:31.758105    4888 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 19:37:31.758113    4888 kubeadm.go:319] 
	I1126 19:37:31.758165    4888 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 19:37:31.758240    4888 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 19:37:31.758308    4888 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 19:37:31.758312    4888 kubeadm.go:319] 
	I1126 19:37:31.758396    4888 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 19:37:31.758472    4888 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 19:37:31.758477    4888 kubeadm.go:319] 
	I1126 19:37:31.758560    4888 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9vmpoi.nosh8iympne0717j \
	I1126 19:37:31.758663    4888 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a \
	I1126 19:37:31.758683    4888 kubeadm.go:319] 	--control-plane 
	I1126 19:37:31.758687    4888 kubeadm.go:319] 
	I1126 19:37:31.758771    4888 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 19:37:31.758775    4888 kubeadm.go:319] 
	I1126 19:37:31.758857    4888 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9vmpoi.nosh8iympne0717j \
	I1126 19:37:31.758959    4888 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a 
	I1126 19:37:31.761445    4888 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1126 19:37:31.761690    4888 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1126 19:37:31.761821    4888 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 19:37:31.761854    4888 cni.go:84] Creating CNI manager for ""
	I1126 19:37:31.761862    4888 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:37:31.766852    4888 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1126 19:37:31.770528    4888 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 19:37:31.774831    4888 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 19:37:31.774851    4888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 19:37:31.788486    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 19:37:32.071130    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:32.071229    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-152801 minikube.k8s.io/updated_at=2025_11_26T19_37_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=addons-152801 minikube.k8s.io/primary=true
	I1126 19:37:32.071281    4888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 19:37:32.221847    4888 ops.go:34] apiserver oom_adj: -16
	I1126 19:37:32.221993    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:32.722632    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:33.222387    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:33.722900    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:34.222963    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:34.722530    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:35.223023    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:35.722671    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:36.222628    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:36.722043    4888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 19:37:36.889307    4888 kubeadm.go:1114] duration metric: took 4.818233192s to wait for elevateKubeSystemPrivileges
	I1126 19:37:36.889334    4888 kubeadm.go:403] duration metric: took 19.285808303s to StartCluster
	I1126 19:37:36.889351    4888 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:36.889470    4888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 19:37:36.889793    4888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:37:36.889998    4888 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 19:37:36.890169    4888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 19:37:36.890305    4888 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1126 19:37:36.890401    4888 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:36.890405    4888 addons.go:70] Setting yakd=true in profile "addons-152801"
	I1126 19:37:36.890427    4888 addons.go:239] Setting addon yakd=true in "addons-152801"
	I1126 19:37:36.890435    4888 addons.go:70] Setting inspektor-gadget=true in profile "addons-152801"
	I1126 19:37:36.890445    4888 addons.go:239] Setting addon inspektor-gadget=true in "addons-152801"
	I1126 19:37:36.890451    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.890463    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.890915    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.890962    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.891405    4888 addons.go:70] Setting metrics-server=true in profile "addons-152801"
	I1126 19:37:36.891430    4888 addons.go:239] Setting addon metrics-server=true in "addons-152801"
	I1126 19:37:36.891454    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.891858    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.892055    4888 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-152801"
	I1126 19:37:36.892078    4888 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-152801"
	I1126 19:37:36.892100    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.892492    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.896068    4888 addons.go:70] Setting cloud-spanner=true in profile "addons-152801"
	I1126 19:37:36.896095    4888 addons.go:239] Setting addon cloud-spanner=true in "addons-152801"
	I1126 19:37:36.896124    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.896650    4888 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-152801"
	I1126 19:37:36.896697    4888 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-152801"
	I1126 19:37:36.896725    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.897131    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898229    4888 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-152801"
	I1126 19:37:36.898447    4888 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-152801"
	I1126 19:37:36.898497    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.898970    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.903038    4888 addons.go:70] Setting default-storageclass=true in profile "addons-152801"
	I1126 19:37:36.906366    4888 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-152801"
	I1126 19:37:36.906815    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898354    4888 addons.go:70] Setting registry-creds=true in profile "addons-152801"
	I1126 19:37:36.911379    4888 addons.go:239] Setting addon registry-creds=true in "addons-152801"
	I1126 19:37:36.911456    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.898361    4888 addons.go:70] Setting storage-provisioner=true in profile "addons-152801"
	I1126 19:37:36.927544    4888 addons.go:239] Setting addon storage-provisioner=true in "addons-152801"
	I1126 19:37:36.930062    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.930660    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.935209    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898368    4888 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-152801"
	I1126 19:37:36.948592    4888 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-152801"
	I1126 19:37:36.949035    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898374    4888 addons.go:70] Setting volcano=true in profile "addons-152801"
	I1126 19:37:36.990622    4888 addons.go:239] Setting addon volcano=true in "addons-152801"
	I1126 19:37:36.990666    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:36.991133    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898379    4888 addons.go:70] Setting volumesnapshots=true in profile "addons-152801"
	I1126 19:37:37.004611    4888 addons.go:239] Setting addon volumesnapshots=true in "addons-152801"
	I1126 19:37:37.004663    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.005127    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.007606    4888 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1126 19:37:37.011140    4888 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1126 19:37:37.011212    4888 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1126 19:37:37.011304    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:36.903054    4888 addons.go:70] Setting gcp-auth=true in profile "addons-152801"
	I1126 19:37:37.015505    4888 mustload.go:66] Loading cluster: addons-152801
	I1126 19:37:37.015704    4888 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:37:37.015975    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.903063    4888 addons.go:70] Setting ingress=true in profile "addons-152801"
	I1126 19:37:37.044761    4888 addons.go:239] Setting addon ingress=true in "addons-152801"
	I1126 19:37:37.044866    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.045465    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.061559    4888 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1126 19:37:37.063699    4888 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1126 19:37:37.065751    4888 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1126 19:37:37.065771    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1126 19:37:37.065864    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:36.903070    4888 addons.go:70] Setting ingress-dns=true in profile "addons-152801"
	I1126 19:37:37.067930    4888 addons.go:239] Setting addon ingress-dns=true in "addons-152801"
	I1126 19:37:37.067977    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.068439    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:36.898343    4888 addons.go:70] Setting registry=true in profile "addons-152801"
	I1126 19:37:37.091209    4888 addons.go:239] Setting addon registry=true in "addons-152801"
	I1126 19:37:37.091249    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.091713    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.100722    4888 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1126 19:37:37.100742    4888 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1126 19:37:37.100811    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:36.905907    4888 out.go:179] * Verifying Kubernetes components...
	I1126 19:37:37.126106    4888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:37:36.927374    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.142534    4888 addons.go:239] Setting addon default-storageclass=true in "addons-152801"
	I1126 19:37:37.142571    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.143117    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.183125    4888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 19:37:37.183617    4888 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1126 19:37:37.194048    4888 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1126 19:37:37.194070    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1126 19:37:37.198923    4888 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1126 19:37:37.201252    4888 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 19:37:37.201759    4888 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1126 19:37:37.201774    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1126 19:37:37.201857    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.194130    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.194139    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1126 19:37:37.230653    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1126 19:37:37.233671    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1126 19:37:37.234808    4888 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1126 19:37:37.235453    4888 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:37:37.235466    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 19:37:37.235531    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.247513    4888 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1126 19:37:37.247532    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1126 19:37:37.247592    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	W1126 19:37:37.257632    4888 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1126 19:37:37.261858    4888 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-152801"
	I1126 19:37:37.261897    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.262318    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:37.282495    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.288050    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1126 19:37:37.290991    4888 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1126 19:37:37.293840    4888 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1126 19:37:37.293861    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1126 19:37:37.294034    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.294201    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1126 19:37:37.297472    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1126 19:37:37.300633    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:37.302274    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1126 19:37:37.325735    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1126 19:37:37.329290    4888 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1126 19:37:37.332091    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1126 19:37:37.332114    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1126 19:37:37.332195    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.337081    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1126 19:37:37.337124    4888 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1126 19:37:37.337191    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.338068    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.348242    4888 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1126 19:37:37.352485    4888 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:37:37.352544    4888 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1126 19:37:37.352775    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1126 19:37:37.352848    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.366542    4888 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 19:37:37.366559    4888 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 19:37:37.366618    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.382896    4888 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:37:37.385906    4888 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1126 19:37:37.391483    4888 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1126 19:37:37.391505    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1126 19:37:37.391619    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.414592    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.415972    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.416913    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.429834    4888 out.go:179]   - Using image docker.io/registry:3.0.0
	I1126 19:37:37.433663    4888 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1126 19:37:37.442036    4888 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1126 19:37:37.442059    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1126 19:37:37.442137    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.451566    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.485744    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.506905    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.540993    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.546403    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.552103    4888 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1126 19:37:37.557322    4888 out.go:179]   - Using image docker.io/busybox:stable
	I1126 19:37:37.560495    4888 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1126 19:37:37.560522    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1126 19:37:37.560581    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:37.574738    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.576134    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.592711    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.603941    4888 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 19:37:37.606384    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:37.632450    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:38.010191    4888 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1126 19:37:38.010211    4888 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1126 19:37:38.071693    4888 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1126 19:37:38.071716    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1126 19:37:38.122435    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:37:38.126569    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1126 19:37:38.199250    4888 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1126 19:37:38.199277    4888 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1126 19:37:38.207965    4888 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1126 19:37:38.207994    4888 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1126 19:37:38.212922    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1126 19:37:38.212949    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1126 19:37:38.218969    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1126 19:37:38.224303    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1126 19:37:38.225559    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1126 19:37:38.234907    4888 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1126 19:37:38.234948    4888 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1126 19:37:38.236518    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1126 19:37:38.237430    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1126 19:37:38.255756    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 19:37:38.266870    4888 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1126 19:37:38.266898    4888 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1126 19:37:38.268286    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1126 19:37:38.268306    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1126 19:37:38.293770    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1126 19:37:38.297313    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1126 19:37:38.298655    4888 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1126 19:37:38.298676    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1126 19:37:38.367508    4888 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1126 19:37:38.367538    4888 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1126 19:37:38.379397    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1126 19:37:38.379441    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1126 19:37:38.392964    4888 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1126 19:37:38.392991    4888 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1126 19:37:38.433173    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1126 19:37:38.466906    4888 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1126 19:37:38.466932    4888 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1126 19:37:38.531336    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1126 19:37:38.531377    4888 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1126 19:37:38.547877    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1126 19:37:38.547904    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1126 19:37:38.594638    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1126 19:37:38.607650    4888 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1126 19:37:38.607676    4888 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1126 19:37:38.628336    4888 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.445120824s)
	I1126 19:37:38.628374    4888 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1126 19:37:38.629486    4888 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.025516135s)
	I1126 19:37:38.630195    4888 node_ready.go:35] waiting up to 6m0s for node "addons-152801" to be "Ready" ...
	I1126 19:37:38.728407    4888 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1126 19:37:38.728433    4888 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1126 19:37:38.799156    4888 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1126 19:37:38.799186    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1126 19:37:38.825316    4888 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:37:38.825341    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1126 19:37:38.825804    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1126 19:37:38.875961    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1126 19:37:38.875986    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1126 19:37:39.008084    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:37:39.117378    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1126 19:37:39.117404    4888 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1126 19:37:39.132799    4888 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-152801" context rescaled to 1 replicas
	I1126 19:37:39.417001    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1126 19:37:39.417024    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1126 19:37:39.642407    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1126 19:37:39.642432    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1126 19:37:39.819138    4888 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1126 19:37:39.819161    4888 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1126 19:37:39.967278    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1126 19:37:40.643200    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:41.175326    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.052816156s)
	I1126 19:37:41.175433    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.048800725s)
	I1126 19:37:41.175458    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.956466949s)
	I1126 19:37:41.984385    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.760046174s)
	I1126 19:37:41.984689    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.759097139s)
	I1126 19:37:41.984744    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.74820781s)
	I1126 19:37:41.984830    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.747348921s)
	I1126 19:37:41.984882    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.729103078s)
	W1126 19:37:42.064655    4888 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1126 19:37:42.294087    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.00027886s)
	I1126 19:37:42.881475    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.286807616s)
	I1126 19:37:42.881552    4888 addons.go:495] Verifying addon metrics-server=true in "addons-152801"
	I1126 19:37:42.881625    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.05579855s)
	I1126 19:37:42.881364    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.448150738s)
	I1126 19:37:42.881868    4888 addons.go:495] Verifying addon registry=true in "addons-152801"
	I1126 19:37:42.881999    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.873879021s)
	W1126 19:37:42.882035    4888 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1126 19:37:42.882078    4888 retry.go:31] will retry after 208.686382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1126 19:37:42.882210    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.584873996s)
	I1126 19:37:42.882222    4888 addons.go:495] Verifying addon ingress=true in "addons-152801"
	I1126 19:37:42.884952    4888 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-152801 service yakd-dashboard -n yakd-dashboard
	
	I1126 19:37:42.885009    4888 out.go:179] * Verifying registry addon...
	I1126 19:37:42.886993    4888 out.go:179] * Verifying ingress addon...
	I1126 19:37:42.887754    4888 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1126 19:37:42.890766    4888 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1126 19:37:42.895688    4888 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1126 19:37:42.895711    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:42.896038    4888 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1126 19:37:42.896058    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:43.091838    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.124469735s)
	I1126 19:37:43.091874    4888 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-152801"
	I1126 19:37:43.092132    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1126 19:37:43.094988    4888 out.go:179] * Verifying csi-hostpath-driver addon...
	I1126 19:37:43.097689    4888 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1126 19:37:43.109090    4888 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1126 19:37:43.109112    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:43.140121    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:43.391898    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:43.394353    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:43.601047    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:43.891325    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:43.893646    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:44.100794    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:44.390792    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:44.393160    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:44.601762    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:44.891281    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:44.893298    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:44.910292    4888 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1126 19:37:44.910386    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:37:44.927371    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:45.064921    4888 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1126 19:37:45.103873    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:45.107421    4888 addons.go:239] Setting addon gcp-auth=true in "addons-152801"
	I1126 19:37:45.107550    4888 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:37:45.108131    4888 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:37:45.139302    4888 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1126 19:37:45.139359    4888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	W1126 19:37:45.159508    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:45.208953    4888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:37:45.391266    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:45.393437    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:45.602303    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:45.788116    4888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.695948181s)
	I1126 19:37:45.791446    4888 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1126 19:37:45.794597    4888 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1126 19:37:45.797277    4888 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1126 19:37:45.797303    4888 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1126 19:37:45.810109    4888 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1126 19:37:45.810132    4888 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1126 19:37:45.822759    4888 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1126 19:37:45.822791    4888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1126 19:37:45.836003    4888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1126 19:37:45.891436    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:45.894013    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:46.101505    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:46.345863    4888 addons.go:495] Verifying addon gcp-auth=true in "addons-152801"
	I1126 19:37:46.348908    4888 out.go:179] * Verifying gcp-auth addon...
	I1126 19:37:46.352611    4888 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1126 19:37:46.358610    4888 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1126 19:37:46.358677    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:46.391501    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:46.393679    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:46.600807    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:46.856258    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:46.891096    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:46.893177    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:47.100840    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:47.355622    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:47.391433    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:47.393751    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:47.600730    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:47.633186    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:47.856214    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:47.891046    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:47.893329    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:48.101082    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:48.355501    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:48.391302    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:48.393339    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:48.600534    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:48.855805    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:48.890839    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:48.893207    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:49.101343    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:49.356074    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:49.390726    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:49.394533    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:49.600540    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:49.633511    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:49.855362    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:49.891008    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:49.893498    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:50.101942    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:50.356235    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:50.391089    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:50.393541    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:50.600483    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:50.856258    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:50.891266    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:50.893788    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:51.100696    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:51.355809    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:51.390577    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:51.393864    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:51.601618    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:51.856237    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:51.891189    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:51.893546    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:52.100977    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:52.137038    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:52.355607    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:52.391392    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:52.393583    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:52.600693    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:52.856195    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:52.890744    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:52.894169    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:53.101051    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:53.355215    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:53.391110    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:53.393314    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:53.601343    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:53.855328    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:53.891099    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:53.893103    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:54.101710    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:54.356237    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:54.390856    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:54.393124    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:54.601047    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:54.633702    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:54.855341    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:54.891376    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:54.893946    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:55.101350    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:55.355907    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:55.390828    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:55.393299    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:55.601225    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:55.856055    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:55.890937    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:55.893433    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:56.101561    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:56.356061    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:56.390595    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:56.394322    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:56.601467    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:56.855791    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:56.891553    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:56.893750    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:57.100828    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:57.133468    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:57.355315    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:57.391233    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:57.393078    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:57.601069    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:57.856238    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:57.890759    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:57.892824    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:58.101385    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:58.356099    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:58.390669    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:58.394146    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:58.601349    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:58.856249    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:58.891047    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:58.893358    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:59.101077    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:37:59.133797    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:37:59.355631    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:59.391521    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:59.393335    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:37:59.601701    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:37:59.856124    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:37:59.890933    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:37:59.893359    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:00.116665    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:00.358907    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:00.392366    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:00.396851    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:00.600418    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:00.856133    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:00.890965    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:00.893096    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:01.100974    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:01.134336    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:01.356260    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:01.391093    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:01.393170    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:01.601571    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:01.856306    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:01.891157    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:01.893316    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:02.101571    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:02.357024    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:02.390672    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:02.394387    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:02.601221    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:02.855743    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:02.891247    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:02.893241    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:03.101339    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:03.355564    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:03.391328    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:03.393617    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:03.600669    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:03.633432    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:03.856317    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:03.891268    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:03.893640    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:04.100668    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:04.355480    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:04.391310    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:04.393609    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:04.600313    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:04.855837    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:04.890841    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:04.893374    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:05.101501    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:05.355341    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:05.391175    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:05.393309    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:05.601252    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:05.855966    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:05.890724    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:05.894160    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:06.101248    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:06.133328    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:06.356127    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:06.392049    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:06.394629    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:06.601330    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:06.855757    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:06.890626    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:06.894420    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:07.100203    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:07.355613    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:07.391384    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:07.393864    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:07.600759    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:07.856104    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:07.891213    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:07.893771    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:08.100589    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:08.133375    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:08.356131    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:08.390878    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:08.393190    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:08.601190    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:08.855623    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:08.892835    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:08.894057    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:09.101036    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:09.355761    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:09.391385    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:09.393319    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:09.601393    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:09.856385    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:09.891031    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:09.893418    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:10.100944    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:10.133773    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:10.355801    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:10.391408    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:10.397784    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:10.600391    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:10.855909    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:10.890941    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:10.893059    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:11.100992    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:11.356453    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:11.391152    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:11.393914    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:11.600881    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:11.856417    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:11.891419    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:11.893304    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:12.101701    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:12.356298    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:12.391156    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:12.393184    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:12.601324    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:12.633306    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:12.856303    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:12.891100    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:12.893400    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:13.101433    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:13.356097    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:13.390661    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:13.394203    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:13.600302    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:13.855673    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:13.891393    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:13.893553    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:14.101116    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:14.355569    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:14.391234    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:14.393067    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:14.600924    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:14.633639    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:14.855471    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:14.891437    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:14.893611    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:15.100399    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:15.356272    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:15.390967    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:15.392949    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:15.600817    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:15.855412    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:15.891104    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:15.893515    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:16.100662    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:16.355812    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:16.391413    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:16.393328    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:16.601246    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1126 19:38:16.633801    4888 node_ready.go:57] node "addons-152801" has "Ready":"False" status (will retry)
	I1126 19:38:16.855411    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:16.891256    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:16.893612    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:17.101444    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:17.356181    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:17.390947    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:17.393377    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:17.601269    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:17.866718    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:17.895100    4888 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1126 19:38:17.895124    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:17.896482    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:18.190559    4888 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1126 19:38:18.190579    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:18.206063    4888 node_ready.go:49] node "addons-152801" is "Ready"
	I1126 19:38:18.206094    4888 node_ready.go:38] duration metric: took 39.575876315s for node "addons-152801" to be "Ready" ...
	I1126 19:38:18.206107    4888 api_server.go:52] waiting for apiserver process to appear ...
	I1126 19:38:18.206165    4888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:38:18.234516    4888 api_server.go:72] duration metric: took 41.344489873s to wait for apiserver process to appear ...
	I1126 19:38:18.234542    4888 api_server.go:88] waiting for apiserver healthz status ...
	I1126 19:38:18.234560    4888 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1126 19:38:18.255587    4888 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1126 19:38:18.263045    4888 api_server.go:141] control plane version: v1.34.1
	I1126 19:38:18.263078    4888 api_server.go:131] duration metric: took 28.528343ms to wait for apiserver health ...
	I1126 19:38:18.263088    4888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 19:38:18.275914    4888 system_pods.go:59] 19 kube-system pods found
	I1126 19:38:18.275954    4888 system_pods.go:61] "coredns-66bc5c9577-qvl2j" [9a754e8d-4928-4fe6-bbec-70cd718917a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:38:18.275961    4888 system_pods.go:61] "csi-hostpath-attacher-0" [ac1eb361-e9a5-46e2-aeba-7fd26ad0e2bd] Pending
	I1126 19:38:18.275967    4888 system_pods.go:61] "csi-hostpath-resizer-0" [1f8b64ed-95d4-474c-b903-60b6c40d6fc0] Pending
	I1126 19:38:18.275975    4888 system_pods.go:61] "csi-hostpathplugin-bshhs" [6c2e8d62-8ef5-4353-8976-9aa7c3e0f667] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:38:18.275984    4888 system_pods.go:61] "etcd-addons-152801" [18fbdd46-010b-4707-85b2-c468ca37ee6c] Running
	I1126 19:38:18.275989    4888 system_pods.go:61] "kindnet-ktxmd" [3e962ef8-76b0-4926-8cfe-671cd851c299] Running
	I1126 19:38:18.275998    4888 system_pods.go:61] "kube-apiserver-addons-152801" [61829c4e-f463-4940-9286-74b1f325de9d] Running
	I1126 19:38:18.276003    4888 system_pods.go:61] "kube-controller-manager-addons-152801" [71a44491-0938-4f2e-8895-a2c85e1c1c56] Running
	I1126 19:38:18.276010    4888 system_pods.go:61] "kube-ingress-dns-minikube" [1c3c1c68-369f-46ff-9770-a948533ddb27] Pending
	I1126 19:38:18.276017    4888 system_pods.go:61] "kube-proxy-7gdlf" [6e73b61c-4615-4c17-af0c-68ce10097f82] Running
	I1126 19:38:18.276021    4888 system_pods.go:61] "kube-scheduler-addons-152801" [9704324b-4662-41c0-ac6d-1673805bc0f0] Running
	I1126 19:38:18.276029    4888 system_pods.go:61] "metrics-server-85b7d694d7-tjllr" [13565e4b-5a4b-448e-b984-dc03582b70dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:38:18.276035    4888 system_pods.go:61] "nvidia-device-plugin-daemonset-rrntc" [658d2994-5e58-41f4-b7ef-fbca089ee861] Pending
	I1126 19:38:18.276039    4888 system_pods.go:61] "registry-6b586f9694-scxrq" [bc7f6a37-ea49-4566-bd97-21f1047456d7] Pending
	I1126 19:38:18.276046    4888 system_pods.go:61] "registry-creds-764b6fb674-hcfnw" [41effe6d-c599-4e98-96a5-69d9638038ac] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:38:18.276064    4888 system_pods.go:61] "registry-proxy-sdxpt" [bf573c71-ee84-46f1-b932-717861ec5583] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:38:18.276071    4888 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gphz4" [5e3110a5-4385-46b3-9aed-c258ebfe891d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.276079    4888 system_pods.go:61] "snapshot-controller-7d9fbc56b8-whphz" [5f669982-2853-4426-a238-6566bc04539b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.276085    4888 system_pods.go:61] "storage-provisioner" [6f084c96-db5e-4615-85a4-046b50712af8] Pending
	I1126 19:38:18.276091    4888 system_pods.go:74] duration metric: took 12.998021ms to wait for pod list to return data ...
	I1126 19:38:18.276099    4888 default_sa.go:34] waiting for default service account to be created ...
	I1126 19:38:18.283748    4888 default_sa.go:45] found service account: "default"
	I1126 19:38:18.283774    4888 default_sa.go:55] duration metric: took 7.669435ms for default service account to be created ...
	I1126 19:38:18.283784    4888 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 19:38:18.287307    4888 system_pods.go:86] 19 kube-system pods found
	I1126 19:38:18.287336    4888 system_pods.go:89] "coredns-66bc5c9577-qvl2j" [9a754e8d-4928-4fe6-bbec-70cd718917a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:38:18.287342    4888 system_pods.go:89] "csi-hostpath-attacher-0" [ac1eb361-e9a5-46e2-aeba-7fd26ad0e2bd] Pending
	I1126 19:38:18.287348    4888 system_pods.go:89] "csi-hostpath-resizer-0" [1f8b64ed-95d4-474c-b903-60b6c40d6fc0] Pending
	I1126 19:38:18.287355    4888 system_pods.go:89] "csi-hostpathplugin-bshhs" [6c2e8d62-8ef5-4353-8976-9aa7c3e0f667] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:38:18.287359    4888 system_pods.go:89] "etcd-addons-152801" [18fbdd46-010b-4707-85b2-c468ca37ee6c] Running
	I1126 19:38:18.287364    4888 system_pods.go:89] "kindnet-ktxmd" [3e962ef8-76b0-4926-8cfe-671cd851c299] Running
	I1126 19:38:18.287374    4888 system_pods.go:89] "kube-apiserver-addons-152801" [61829c4e-f463-4940-9286-74b1f325de9d] Running
	I1126 19:38:18.287378    4888 system_pods.go:89] "kube-controller-manager-addons-152801" [71a44491-0938-4f2e-8895-a2c85e1c1c56] Running
	I1126 19:38:18.287387    4888 system_pods.go:89] "kube-ingress-dns-minikube" [1c3c1c68-369f-46ff-9770-a948533ddb27] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:38:18.287391    4888 system_pods.go:89] "kube-proxy-7gdlf" [6e73b61c-4615-4c17-af0c-68ce10097f82] Running
	I1126 19:38:18.287396    4888 system_pods.go:89] "kube-scheduler-addons-152801" [9704324b-4662-41c0-ac6d-1673805bc0f0] Running
	I1126 19:38:18.287402    4888 system_pods.go:89] "metrics-server-85b7d694d7-tjllr" [13565e4b-5a4b-448e-b984-dc03582b70dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:38:18.287415    4888 system_pods.go:89] "nvidia-device-plugin-daemonset-rrntc" [658d2994-5e58-41f4-b7ef-fbca089ee861] Pending
	I1126 19:38:18.287419    4888 system_pods.go:89] "registry-6b586f9694-scxrq" [bc7f6a37-ea49-4566-bd97-21f1047456d7] Pending
	I1126 19:38:18.287426    4888 system_pods.go:89] "registry-creds-764b6fb674-hcfnw" [41effe6d-c599-4e98-96a5-69d9638038ac] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:38:18.287436    4888 system_pods.go:89] "registry-proxy-sdxpt" [bf573c71-ee84-46f1-b932-717861ec5583] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:38:18.287444    4888 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gphz4" [5e3110a5-4385-46b3-9aed-c258ebfe891d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.287450    4888 system_pods.go:89] "snapshot-controller-7d9fbc56b8-whphz" [5f669982-2853-4426-a238-6566bc04539b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.287455    4888 system_pods.go:89] "storage-provisioner" [6f084c96-db5e-4615-85a4-046b50712af8] Pending
	I1126 19:38:18.287471    4888 retry.go:31] will retry after 215.290936ms: missing components: kube-dns
	I1126 19:38:18.361140    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:18.391330    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:18.393912    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:18.523237    4888 system_pods.go:86] 19 kube-system pods found
	I1126 19:38:18.523275    4888 system_pods.go:89] "coredns-66bc5c9577-qvl2j" [9a754e8d-4928-4fe6-bbec-70cd718917a6] Running
	I1126 19:38:18.523287    4888 system_pods.go:89] "csi-hostpath-attacher-0" [ac1eb361-e9a5-46e2-aeba-7fd26ad0e2bd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1126 19:38:18.523295    4888 system_pods.go:89] "csi-hostpath-resizer-0" [1f8b64ed-95d4-474c-b903-60b6c40d6fc0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1126 19:38:18.523303    4888 system_pods.go:89] "csi-hostpathplugin-bshhs" [6c2e8d62-8ef5-4353-8976-9aa7c3e0f667] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1126 19:38:18.523308    4888 system_pods.go:89] "etcd-addons-152801" [18fbdd46-010b-4707-85b2-c468ca37ee6c] Running
	I1126 19:38:18.523313    4888 system_pods.go:89] "kindnet-ktxmd" [3e962ef8-76b0-4926-8cfe-671cd851c299] Running
	I1126 19:38:18.523322    4888 system_pods.go:89] "kube-apiserver-addons-152801" [61829c4e-f463-4940-9286-74b1f325de9d] Running
	I1126 19:38:18.523326    4888 system_pods.go:89] "kube-controller-manager-addons-152801" [71a44491-0938-4f2e-8895-a2c85e1c1c56] Running
	I1126 19:38:18.523336    4888 system_pods.go:89] "kube-ingress-dns-minikube" [1c3c1c68-369f-46ff-9770-a948533ddb27] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1126 19:38:18.523340    4888 system_pods.go:89] "kube-proxy-7gdlf" [6e73b61c-4615-4c17-af0c-68ce10097f82] Running
	I1126 19:38:18.523352    4888 system_pods.go:89] "kube-scheduler-addons-152801" [9704324b-4662-41c0-ac6d-1673805bc0f0] Running
	I1126 19:38:18.523358    4888 system_pods.go:89] "metrics-server-85b7d694d7-tjllr" [13565e4b-5a4b-448e-b984-dc03582b70dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1126 19:38:18.523364    4888 system_pods.go:89] "nvidia-device-plugin-daemonset-rrntc" [658d2994-5e58-41f4-b7ef-fbca089ee861] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1126 19:38:18.523375    4888 system_pods.go:89] "registry-6b586f9694-scxrq" [bc7f6a37-ea49-4566-bd97-21f1047456d7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1126 19:38:18.523382    4888 system_pods.go:89] "registry-creds-764b6fb674-hcfnw" [41effe6d-c599-4e98-96a5-69d9638038ac] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1126 19:38:18.523395    4888 system_pods.go:89] "registry-proxy-sdxpt" [bf573c71-ee84-46f1-b932-717861ec5583] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1126 19:38:18.523401    4888 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gphz4" [5e3110a5-4385-46b3-9aed-c258ebfe891d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.523408    4888 system_pods.go:89] "snapshot-controller-7d9fbc56b8-whphz" [5f669982-2853-4426-a238-6566bc04539b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1126 19:38:18.523416    4888 system_pods.go:89] "storage-provisioner" [6f084c96-db5e-4615-85a4-046b50712af8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:38:18.523424    4888 system_pods.go:126] duration metric: took 239.634166ms to wait for k8s-apps to be running ...
	I1126 19:38:18.523436    4888 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 19:38:18.523493    4888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:38:18.542176    4888 system_svc.go:56] duration metric: took 18.730951ms WaitForService to wait for kubelet
	I1126 19:38:18.542206    4888 kubeadm.go:587] duration metric: took 41.652183595s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:38:18.542224    4888 node_conditions.go:102] verifying NodePressure condition ...
	I1126 19:38:18.547730    4888 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 19:38:18.547764    4888 node_conditions.go:123] node cpu capacity is 2
	I1126 19:38:18.547777    4888 node_conditions.go:105] duration metric: took 5.548148ms to run NodePressure ...
	I1126 19:38:18.547791    4888 start.go:242] waiting for startup goroutines ...
	I1126 19:38:18.610513    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:18.857137    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:18.958953    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:18.959419    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:19.102409    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:19.356417    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:19.391508    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:19.394380    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:19.621298    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:19.856606    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:19.891908    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:19.894306    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:20.102393    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:20.359934    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:20.464481    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:20.465091    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:20.603000    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:20.856553    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:20.891908    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:20.895179    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:21.103023    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:21.356010    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:21.391289    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:21.394157    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:21.602061    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:21.856628    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:21.891631    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:21.893721    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:22.102707    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:22.358517    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:22.402377    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:22.459371    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:22.609871    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:22.859369    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:22.892711    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:22.896772    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:23.112512    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:23.356470    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:23.395097    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:23.395187    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:23.601365    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:23.856337    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:23.899701    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:23.901707    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:24.101953    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:24.356023    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:24.392619    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:24.397292    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:24.608557    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:24.862062    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:24.893335    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:24.897604    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:25.106664    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:25.360718    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:25.397358    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:25.397597    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:25.601952    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:25.860032    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:25.890891    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:25.901584    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:26.101083    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:26.355991    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:26.391043    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:26.394608    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:26.605820    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:26.855458    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:26.891208    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:26.893866    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:27.101278    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:27.356152    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:27.391398    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:27.394432    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:27.601563    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:27.855840    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:27.891782    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:27.894553    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:28.101905    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:28.357477    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:28.393403    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:28.394978    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:28.602134    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:28.856373    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:28.892361    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:28.894182    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:29.102362    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:29.356727    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:29.390646    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:29.394616    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:29.601084    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:29.856349    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:29.893040    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:29.895510    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:30.102633    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:30.356354    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:30.392223    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:30.395230    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:30.602239    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:30.856574    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:30.892716    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:30.895167    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:31.102448    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:31.356090    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:31.392249    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:31.393758    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:31.601540    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:31.855634    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:31.891832    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:31.894032    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:32.102400    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:32.356468    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:32.391837    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:32.394877    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:32.602638    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:32.857023    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:32.892366    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:32.895314    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:33.103195    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:33.357363    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:33.391805    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:33.394553    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:33.601676    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:33.856074    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:33.890819    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:33.893480    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:34.102293    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:34.356657    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:34.390680    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:34.394477    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:34.603172    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:34.856690    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:34.890872    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:34.893369    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:35.101744    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:35.356283    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:35.391695    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:35.394486    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:35.602006    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:35.856073    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:35.891680    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:35.894393    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:36.102337    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:36.356380    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:36.401198    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:36.402740    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:36.601189    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:36.857114    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:36.891413    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:36.900533    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:37.101317    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:37.356364    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:37.458267    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:37.458549    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:37.601608    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:37.855899    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:37.890938    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:37.893426    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:38.102379    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:38.355487    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:38.392442    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:38.394166    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:38.601463    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:38.855859    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:38.890515    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:38.894124    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:39.101582    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:39.356802    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:39.390849    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:39.393811    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:39.601086    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:39.856725    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:39.890927    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:39.893727    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:40.101787    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:40.356601    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:40.391484    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:40.393608    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:40.601733    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:40.855259    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:40.891385    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:40.893953    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:41.101308    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:41.356422    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:41.391704    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:41.394288    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:41.602593    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:41.856582    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:41.891701    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:41.893974    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:42.102693    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:42.355578    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:42.391758    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:42.394343    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:42.601907    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:42.856294    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:42.892334    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:42.893856    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:43.101382    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:43.356769    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:43.391496    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:43.394044    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:43.601545    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:43.855318    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:43.891110    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:43.894158    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:44.101880    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:44.356008    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:44.390950    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:44.393327    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:44.607813    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:44.856170    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:44.891167    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:44.893734    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:45.109338    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:45.356189    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:45.393299    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:45.394280    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:45.601497    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:45.855565    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:45.891952    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:45.894777    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:46.108628    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:46.355653    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:46.393541    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:46.395601    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:46.603395    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:46.855770    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:46.891871    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:46.894396    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:47.101507    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:47.356135    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:47.391116    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:47.393830    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:47.600576    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:47.858473    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:47.894552    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:47.896759    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:48.101599    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:48.356054    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:48.391645    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:48.394483    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:48.601541    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:48.855641    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:48.892170    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:48.895568    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:49.101957    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:49.355534    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:49.392669    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:49.393837    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:49.601190    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:49.856262    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:49.891074    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:49.893385    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:50.106786    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:50.356360    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:50.391189    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:50.393396    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:50.601282    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:50.856733    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:50.890820    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:50.893354    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:51.101392    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:51.355653    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:51.392202    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:51.394851    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:51.600766    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:51.855871    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:51.890904    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:51.893579    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:52.100908    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:52.355558    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:52.391573    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:52.393913    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:52.601418    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:52.856409    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:52.891461    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:52.899010    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:53.101790    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:53.356271    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:53.391151    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:53.393485    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:53.602122    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:53.856581    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:53.891595    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:53.893735    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:54.102730    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:54.357146    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:54.393790    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:54.395790    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:54.601973    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:54.855951    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:54.893585    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:54.900984    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:55.102239    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:55.363073    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:55.391485    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:55.394887    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:55.601126    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:55.856530    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:55.891443    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:55.893790    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:56.101205    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:56.356039    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:56.391029    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:56.393322    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:56.601779    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:56.856643    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:56.891378    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:56.893662    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:57.101677    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:57.356063    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:57.391109    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:57.393475    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:57.600653    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:57.856192    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:57.893278    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:57.895359    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:58.101422    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:58.356663    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:58.391845    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:58.394431    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:58.601400    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:58.855892    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:58.890840    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:58.893438    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:59.101686    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:59.356249    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:59.391041    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1126 19:38:59.393579    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:38:59.601430    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:38:59.856580    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:38:59.891578    4888 kapi.go:107] duration metric: took 1m17.003822008s to wait for kubernetes.io/minikube-addons=registry ...
	I1126 19:38:59.893843    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:00.125911    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:00.364885    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:00.417515    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:00.602321    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:00.856294    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:00.894947    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:01.101438    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:01.357391    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:01.395072    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:01.603336    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:01.858486    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:01.894644    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:02.101770    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:02.356840    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:02.393887    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:02.601519    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:02.855624    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:02.894536    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:03.101050    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:03.356469    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:03.394954    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:03.601335    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:03.856485    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:03.900398    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:04.101381    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:04.356630    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:04.395373    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:04.603551    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:04.856275    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:04.894335    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:05.101913    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:05.356158    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:05.394353    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:05.601517    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:05.855636    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:05.894894    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:06.107460    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:06.356942    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:06.394107    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:06.601838    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:06.855870    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:06.894003    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:07.102081    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:07.356896    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:07.394463    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:07.600977    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:07.856625    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:07.894926    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:08.101771    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:08.355943    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:08.394577    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:08.603457    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:08.855523    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:08.896139    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:09.101704    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:09.356042    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:09.394604    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:09.601445    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:09.855421    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:09.894743    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:10.101423    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:10.356384    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:10.394190    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:10.601086    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:10.855543    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:10.894981    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:11.107640    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:11.357954    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:11.459421    4888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1126 19:39:11.602617    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:11.875098    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:11.900425    4888 kapi.go:107] duration metric: took 1m29.009658765s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1126 19:39:12.103094    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:12.356232    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:12.601891    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:12.856002    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:13.101877    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:13.356267    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:13.602027    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:13.856966    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:14.103467    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:14.355834    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:14.603125    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:14.856106    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:15.102408    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:15.355734    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:15.601789    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:15.856010    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:16.101970    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:16.356643    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:16.601427    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:16.855472    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:17.101456    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:17.355959    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:17.601917    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:17.856552    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:18.100635    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:18.356080    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:18.601879    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:18.856067    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:19.103446    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:19.356773    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:19.602760    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:19.856538    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1126 19:39:20.101618    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:20.356284    4888 kapi.go:107] duration metric: took 1m34.003669321s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1126 19:39:20.360210    4888 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-152801 cluster.
	I1126 19:39:20.363359    4888 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1126 19:39:20.366527    4888 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1126 19:39:20.602154    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:21.101506    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:21.601486    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:22.112263    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:22.602279    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:23.101052    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:23.601795    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:24.104906    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:24.604316    4888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1126 19:39:25.102092    4888 kapi.go:107] duration metric: took 1m42.004399157s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1126 19:39:25.108920    4888 out.go:179] * Enabled addons: storage-provisioner, cloud-spanner, registry-creds, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, default-storageclass, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1126 19:39:25.112490    4888 addons.go:530] duration metric: took 1m48.222169314s for enable addons: enabled=[storage-provisioner cloud-spanner registry-creds ingress-dns nvidia-device-plugin amd-gpu-device-plugin default-storageclass inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1126 19:39:25.112548    4888 start.go:247] waiting for cluster config update ...
	I1126 19:39:25.112571    4888 start.go:256] writing updated cluster config ...
	I1126 19:39:25.112905    4888 ssh_runner.go:195] Run: rm -f paused
	I1126 19:39:25.117659    4888 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:39:25.121134    4888 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qvl2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.126321    4888 pod_ready.go:94] pod "coredns-66bc5c9577-qvl2j" is "Ready"
	I1126 19:39:25.126349    4888 pod_ready.go:86] duration metric: took 5.188236ms for pod "coredns-66bc5c9577-qvl2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.129799    4888 pod_ready.go:83] waiting for pod "etcd-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.135015    4888 pod_ready.go:94] pod "etcd-addons-152801" is "Ready"
	I1126 19:39:25.135041    4888 pod_ready.go:86] duration metric: took 5.215353ms for pod "etcd-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.137367    4888 pod_ready.go:83] waiting for pod "kube-apiserver-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.142495    4888 pod_ready.go:94] pod "kube-apiserver-addons-152801" is "Ready"
	I1126 19:39:25.142522    4888 pod_ready.go:86] duration metric: took 5.131588ms for pod "kube-apiserver-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.145395    4888 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.521829    4888 pod_ready.go:94] pod "kube-controller-manager-addons-152801" is "Ready"
	I1126 19:39:25.521862    4888 pod_ready.go:86] duration metric: took 376.439693ms for pod "kube-controller-manager-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:25.722370    4888 pod_ready.go:83] waiting for pod "kube-proxy-7gdlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:26.121828    4888 pod_ready.go:94] pod "kube-proxy-7gdlf" is "Ready"
	I1126 19:39:26.121857    4888 pod_ready.go:86] duration metric: took 399.458833ms for pod "kube-proxy-7gdlf" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:26.322401    4888 pod_ready.go:83] waiting for pod "kube-scheduler-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:26.722030    4888 pod_ready.go:94] pod "kube-scheduler-addons-152801" is "Ready"
	I1126 19:39:26.722059    4888 pod_ready.go:86] duration metric: took 399.634637ms for pod "kube-scheduler-addons-152801" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:39:26.722072    4888 pod_ready.go:40] duration metric: took 1.60437999s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:39:26.781521    4888 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 19:39:26.784695    4888 out.go:179] * Done! kubectl is now configured to use "addons-152801" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 19:39:56 addons-152801 crio[833]: time="2025-11-26T19:39:56.889379487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:39:56 addons-152801 crio[833]: time="2025-11-26T19:39:56.890103189Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:39:56 addons-152801 crio[833]: time="2025-11-26T19:39:56.908154325Z" level=info msg="Created container c97f7d7c91794be177d78745891dead89f3cf4d6d00dd0d7f5c07789dacceb1f: default/test-local-path/busybox" id=5ad372fb-84bd-42d9-9741-6cd73d73be4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:39:56 addons-152801 crio[833]: time="2025-11-26T19:39:56.911584704Z" level=info msg="Starting container: c97f7d7c91794be177d78745891dead89f3cf4d6d00dd0d7f5c07789dacceb1f" id=d894781c-ccbd-4805-94a3-34cce0e018a6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 19:39:56 addons-152801 crio[833]: time="2025-11-26T19:39:56.915238472Z" level=info msg="Started container" PID=5287 containerID=c97f7d7c91794be177d78745891dead89f3cf4d6d00dd0d7f5c07789dacceb1f description=default/test-local-path/busybox id=d894781c-ccbd-4805-94a3-34cce0e018a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=07afe31cdbb0ed9c2dac864add2548675510beb725153fa8b9603581943b8745
	Nov 26 19:39:58 addons-152801 crio[833]: time="2025-11-26T19:39:58.07420857Z" level=info msg="Stopping pod sandbox: 07afe31cdbb0ed9c2dac864add2548675510beb725153fa8b9603581943b8745" id=9dbddd41-8705-40a0-911b-4f30999518dd name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 26 19:39:58 addons-152801 crio[833]: time="2025-11-26T19:39:58.074481648Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:07afe31cdbb0ed9c2dac864add2548675510beb725153fa8b9603581943b8745 UID:46f1716c-6f12-4a3e-892e-d4ce0ff42f12 NetNS:/var/run/netns/6e3ca7a2-9ebd-43eb-9c00-b5d9784c11aa Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002246048}] Aliases:map[]}"
	Nov 26 19:39:58 addons-152801 crio[833]: time="2025-11-26T19:39:58.074664249Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Nov 26 19:39:58 addons-152801 crio[833]: time="2025-11-26T19:39:58.106376102Z" level=info msg="Stopped pod sandbox: 07afe31cdbb0ed9c2dac864add2548675510beb725153fa8b9603581943b8745" id=9dbddd41-8705-40a0-911b-4f30999518dd name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.10980768Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087/POD" id=18321b64-d147-4e5b-844b-8b70c5a3613d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.109897434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.131732854Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087 Namespace:local-path-storage ID:5c03c3ffda7e0135b8de1a66c72bc010e14288c2854751bf093144148124c739 UID:95fa449c-899e-4d3a-9821-8e37609debbd NetNS:/var/run/netns/9649f5a4-96bd-4439-b582-281189f5d88d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001e22620}] Aliases:map[]}"
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.131781189Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087 to CNI network \"kindnet\" (type=ptp)"
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.150040352Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087 Namespace:local-path-storage ID:5c03c3ffda7e0135b8de1a66c72bc010e14288c2854751bf093144148124c739 UID:95fa449c-899e-4d3a-9821-8e37609debbd NetNS:/var/run/netns/9649f5a4-96bd-4439-b582-281189f5d88d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001e22620}] Aliases:map[]}"
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.150236205Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087 for CNI network kindnet (type=ptp)"
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.160529177Z" level=info msg="Ran pod sandbox 5c03c3ffda7e0135b8de1a66c72bc010e14288c2854751bf093144148124c739 with infra container: local-path-storage/helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087/POD" id=18321b64-d147-4e5b-844b-8b70c5a3613d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.163268962Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=b30db1bb-79de-4566-9846-87bfe910dd84 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.175327191Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=074c7649-6161-40e1-8e93-65be0f6466ba name=/runtime.v1.ImageService/ImageStatus
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.195861295Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087/helper-pod" id=b9b23acf-f9cd-4eee-8a6f-59b5a6f4b163 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.195979061Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.208145981Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.208766169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.330476254Z" level=info msg="Created container e924a4552f66c4a8d7de3a1ecdb3b431d0f5524974099b16d974f141c8a02922: local-path-storage/helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087/helper-pod" id=b9b23acf-f9cd-4eee-8a6f-59b5a6f4b163 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.342412031Z" level=info msg="Starting container: e924a4552f66c4a8d7de3a1ecdb3b431d0f5524974099b16d974f141c8a02922" id=cf046216-87fc-45be-9119-5695e3b9b994 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 19:40:00 addons-152801 crio[833]: time="2025-11-26T19:40:00.386899897Z" level=info msg="Started container" PID=5430 containerID=e924a4552f66c4a8d7de3a1ecdb3b431d0f5524974099b16d974f141c8a02922 description=local-path-storage/helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087/helper-pod id=cf046216-87fc-45be-9119-5695e3b9b994 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c03c3ffda7e0135b8de1a66c72bc010e14288c2854751bf093144148124c739
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	e924a4552f66c       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   5c03c3ffda7e0       helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087   local-path-storage
	c97f7d7c91794       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   07afe31cdbb0e       test-local-path                                              default
	f0c0c158d3579       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   fc9cf14a33395       helper-pod-create-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087   local-path-storage
	7f84785e22d16       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          9 seconds ago        Exited              registry-test                            0                   420320cca7d33       registry-test                                                default
	fce906cf12d01       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          30 seconds ago       Running             busybox                                  0                   18103862e9352       busybox                                                      default
	5cdc59e655381       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          36 seconds ago       Running             csi-snapshotter                          0                   e362139e28f18       csi-hostpathplugin-bshhs                                     kube-system
	0d2525ad7c6f9       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          38 seconds ago       Running             csi-provisioner                          0                   e362139e28f18       csi-hostpathplugin-bshhs                                     kube-system
	68f9098f874c1       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            40 seconds ago       Running             liveness-probe                           0                   e362139e28f18       csi-hostpathplugin-bshhs                                     kube-system
	c7b9d11300784       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           40 seconds ago       Running             hostpath                                 0                   e362139e28f18       csi-hostpathplugin-bshhs                                     kube-system
	c40c5e8f24aca       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 42 seconds ago       Running             gcp-auth                                 0                   55d6b44dddf48       gcp-auth-78565c9fb4-fks2w                                    gcp-auth
	5f7e0a69f6079       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            45 seconds ago       Running             gadget                                   0                   cd509f2dd6065       gadget-vnrsj                                                 gadget
	a4e36f02d445a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                48 seconds ago       Running             node-driver-registrar                    0                   e362139e28f18       csi-hostpathplugin-bshhs                                     kube-system
	7f1a0ce591f6c       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             49 seconds ago       Running             controller                               0                   75ca2c5bd84b3       ingress-nginx-controller-6c8bf45fb-j7qhq                     ingress-nginx
	333ebda1f94e9       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              56 seconds ago       Running             csi-resizer                              0                   5bb0c2a6662cb       csi-hostpath-resizer-0                                       kube-system
	e4aba6b77535f       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               57 seconds ago       Running             cloud-spanner-emulator                   0                   7f8baf59ccf19       cloud-spanner-emulator-5bdddb765-chzvk                       default
	cb793f072e63d       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             57 seconds ago       Exited              patch                                    3                   1a6f3578af7f1       gcp-auth-certs-patch-w2xsr                                   gcp-auth
	be6e4f7ecbd7c       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   35ca34017c282       registry-proxy-sdxpt                                         kube-system
	357f60871c591       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   85311bf5645a2       snapshot-controller-7d9fbc56b8-whphz                         kube-system
	6b2cce003afc3       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   4110db22e84b3       yakd-dashboard-5ff678cb9-4wcfn                               yakd-dashboard
	bbda721ec7889       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   02502b7824730       csi-hostpath-attacher-0                                      kube-system
	5aa817b9fa068       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   69f1d5b9dd084       nvidia-device-plugin-daemonset-rrntc                         kube-system
	d4b8bdfa752c6       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   0aa8ff336d827       local-path-provisioner-648f6765c9-gqgw2                      local-path-storage
	33e2dbaa04cd8       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   e362139e28f18       csi-hostpathplugin-bshhs                                     kube-system
	2aecd6362c5e2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              patch                                    0                   a93d3e36814a3       ingress-nginx-admission-patch-xlj8c                          ingress-nginx
	67ccc4b888832       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   98bfe50df195c       registry-6b586f9694-scxrq                                    kube-system
	e3af750d29e79       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   50779a521dd2f       snapshot-controller-7d9fbc56b8-gphz4                         kube-system
	3cd75fe86fc63       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   39a1fa7f62fba       kube-ingress-dns-minikube                                    kube-system
	3435418167dd8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   3a1662169e2ef       ingress-nginx-admission-create-g8z27                         ingress-nginx
	f900f636f3c4d       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   59bd798bb4e2a       metrics-server-85b7d694d7-tjllr                              kube-system
	d0021ecd91f06       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   0b3bbfb2c610d       storage-provisioner                                          kube-system
	2c15569036061       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   edd4e41773c54       coredns-66bc5c9577-qvl2j                                     kube-system
	4cfa09096b086       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   a20cd8059aa58       kindnet-ktxmd                                                kube-system
	4f25a6570f326       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   54a0245e9f072       kube-proxy-7gdlf                                             kube-system
	4365cc22027bb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   70cd354bee38a       etcd-addons-152801                                           kube-system
	b21aa95449406       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   30f1a0eae29f4       kube-apiserver-addons-152801                                 kube-system
	899c0cef3d3c5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   9b6295b6b2ce1       kube-scheduler-addons-152801                                 kube-system
	6bd6a4e5eae30       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   be7aadee1bb4b       kube-controller-manager-addons-152801                        kube-system
	
	
	==> coredns [2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32] <==
	[INFO] 10.244.0.8:33810 - 42686 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002436694s
	[INFO] 10.244.0.8:33810 - 23777 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00011431s
	[INFO] 10.244.0.8:33810 - 9159 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00016473s
	[INFO] 10.244.0.8:51142 - 8440 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000169522s
	[INFO] 10.244.0.8:51142 - 8169 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000071579s
	[INFO] 10.244.0.8:56005 - 23420 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000916s
	[INFO] 10.244.0.8:56005 - 23174 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065015s
	[INFO] 10.244.0.8:34846 - 48615 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088868s
	[INFO] 10.244.0.8:34846 - 48170 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000166453s
	[INFO] 10.244.0.8:39118 - 4145 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001441218s
	[INFO] 10.244.0.8:39118 - 4565 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001594986s
	[INFO] 10.244.0.8:43309 - 61125 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00013433s
	[INFO] 10.244.0.8:43309 - 60722 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000084962s
	[INFO] 10.244.0.21:36407 - 18148 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000179565s
	[INFO] 10.244.0.21:51442 - 63332 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000122343s
	[INFO] 10.244.0.21:56301 - 48694 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091879s
	[INFO] 10.244.0.21:46758 - 35467 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096695s
	[INFO] 10.244.0.21:41552 - 55988 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099402s
	[INFO] 10.244.0.21:41942 - 34650 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081385s
	[INFO] 10.244.0.21:44469 - 27731 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002628408s
	[INFO] 10.244.0.21:58678 - 23954 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002422497s
	[INFO] 10.244.0.21:42423 - 13049 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002177219s
	[INFO] 10.244.0.21:49599 - 53468 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002421029s
	[INFO] 10.244.0.23:39879 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000254887s
	[INFO] 10.244.0.23:44129 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000136522s
	
	
	==> describe nodes <==
	Name:               addons-152801
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-152801
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=addons-152801
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T19_37_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-152801
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-152801"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:37:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-152801
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 19:39:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 19:39:33 +0000   Wed, 26 Nov 2025 19:37:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 19:39:33 +0000   Wed, 26 Nov 2025 19:37:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 19:39:33 +0000   Wed, 26 Nov 2025 19:37:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 19:39:33 +0000   Wed, 26 Nov 2025 19:38:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-152801
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                bca91ee9-088f-4b6e-9b97-43c6020effa7
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     cloud-spanner-emulator-5bdddb765-chzvk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-vnrsj                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  gcp-auth                    gcp-auth-78565c9fb4-fks2w                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-j7qhq                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m19s
	  kube-system                 coredns-66bc5c9577-qvl2j                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 csi-hostpathplugin-bshhs                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 etcd-addons-152801                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m31s
	  kube-system                 kindnet-ktxmd                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-addons-152801                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-addons-152801                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-7gdlf                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-addons-152801                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 metrics-server-85b7d694d7-tjllr                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m20s
	  kube-system                 nvidia-device-plugin-daemonset-rrntc                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 registry-6b586f9694-scxrq                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 registry-creds-764b6fb674-hcfnw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 registry-proxy-sdxpt                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 snapshot-controller-7d9fbc56b8-gphz4                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 snapshot-controller-7d9fbc56b8-whphz                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  local-path-storage          helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-648f6765c9-gqgw2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-4wcfn                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 2m38s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m38s)  kubelet          Node addons-152801 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m38s)  kubelet          Node addons-152801 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m38s)  kubelet          Node addons-152801 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node addons-152801 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node addons-152801 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node addons-152801 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m26s                  node-controller  Node addons-152801 event: Registered Node addons-152801 in Controller
	  Normal   NodeReady                104s                   kubelet          Node addons-152801 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov26 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014220] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507172] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032749] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.773464] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.697672] kauditd_printk_skb: 36 callbacks suppressed
	[Nov26 19:37] overlayfs: idmapped layers are currently not supported
	[  +0.074077] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov26 19:39] hrtimer: interrupt took 16123050 ns
	
	
	==> etcd [4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72] <==
	{"level":"warn","ts":"2025-11-26T19:37:27.394778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.422081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.425511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.442129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.458914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.477888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.493332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.518706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.538997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.550502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.567327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.584398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.602487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.618700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.640121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.660885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.676551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.705688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:27.758105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:43.411845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:37:43.433726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:38:05.438651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:38:05.454589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:38:05.481188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:38:05.497244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37394","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [c40c5e8f24acaac35ca06de5e702a8ee04f3e12e10d17eeddaf164cea9753db5] <==
	2025/11/26 19:39:19 GCP Auth Webhook started!
	2025/11/26 19:39:27 Ready to marshal response ...
	2025/11/26 19:39:27 Ready to write response ...
	2025/11/26 19:39:27 Ready to marshal response ...
	2025/11/26 19:39:27 Ready to write response ...
	2025/11/26 19:39:28 Ready to marshal response ...
	2025/11/26 19:39:28 Ready to write response ...
	2025/11/26 19:39:49 Ready to marshal response ...
	2025/11/26 19:39:49 Ready to write response ...
	2025/11/26 19:39:49 Ready to marshal response ...
	2025/11/26 19:39:49 Ready to write response ...
	2025/11/26 19:39:49 Ready to marshal response ...
	2025/11/26 19:39:49 Ready to write response ...
	2025/11/26 19:39:59 Ready to marshal response ...
	2025/11/26 19:39:59 Ready to write response ...
	
	
	==> kernel <==
	 19:40:01 up 22 min,  0 user,  load average: 2.42, 1.36, 0.56
	Linux addons-152801 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707] <==
	I1126 19:38:08.735299       1 metrics.go:72] Registering metrics
	I1126 19:38:08.735355       1 controller.go:711] "Syncing nftables rules"
	E1126 19:38:08.735571       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1126 19:38:17.135146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:38:17.135182       1 main.go:301] handling current node
	I1126 19:38:27.134580       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:38:27.134632       1 main.go:301] handling current node
	I1126 19:38:37.135298       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:38:37.135326       1 main.go:301] handling current node
	I1126 19:38:47.134780       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:38:47.134814       1 main.go:301] handling current node
	I1126 19:38:57.134408       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:38:57.134450       1 main.go:301] handling current node
	I1126 19:39:07.135070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:39:07.135106       1 main.go:301] handling current node
	I1126 19:39:17.134784       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:39:17.134854       1 main.go:301] handling current node
	I1126 19:39:27.134583       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:39:27.134624       1 main.go:301] handling current node
	I1126 19:39:37.135293       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:39:37.135405       1 main.go:301] handling current node
	I1126 19:39:47.141523       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:39:47.141560       1 main.go:301] handling current node
	I1126 19:39:57.134878       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:39:57.134911       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515] <==
	W1126 19:37:43.411697       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:37:43.426675       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1126 19:37:46.208331       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.110.245.89"}
	W1126 19:38:05.438123       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:38:05.453898       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:38:05.481173       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:38:05.496902       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1126 19:38:17.701561       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.245.89:443: connect: connection refused
	E1126 19:38:17.706676       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.245.89:443: connect: connection refused" logger="UnhandledError"
	W1126 19:38:17.707504       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.245.89:443: connect: connection refused
	E1126 19:38:17.707674       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.245.89:443: connect: connection refused" logger="UnhandledError"
	W1126 19:38:17.801241       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.245.89:443: connect: connection refused
	E1126 19:38:17.801283       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.245.89:443: connect: connection refused" logger="UnhandledError"
	E1126 19:38:34.655076       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.157.237:443: connect: connection refused" logger="UnhandledError"
	W1126 19:38:34.655248       1 handler_proxy.go:99] no RequestInfo found in the context
	E1126 19:38:34.655332       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1126 19:38:34.656452       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.157.237:443: connect: connection refused" logger="UnhandledError"
	E1126 19:38:34.661312       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.157.237:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.157.237:443: connect: connection refused" logger="UnhandledError"
	I1126 19:38:34.756765       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1126 19:39:36.288615       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58764: use of closed network connection
	E1126 19:39:36.526512       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58782: use of closed network connection
	E1126 19:39:36.670127       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58810: use of closed network connection
	
	
	==> kube-controller-manager [6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b] <==
	I1126 19:37:35.473994       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 19:37:35.474103       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 19:37:35.474148       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 19:37:35.474220       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 19:37:35.474292       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 19:37:35.474507       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 19:37:35.476558       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 19:37:35.477812       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 19:37:35.477894       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 19:37:35.477948       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 19:37:35.479899       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 19:37:35.481956       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 19:37:35.482915       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 19:37:35.482990       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 19:37:35.483021       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 19:37:35.483050       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 19:37:35.511020       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-152801" podCIDRs=["10.244.0.0/24"]
	E1126 19:38:05.431245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1126 19:38:05.431413       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1126 19:38:05.431457       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1126 19:38:05.463765       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1126 19:38:05.474228       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1126 19:38:05.532638       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:38:05.575038       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 19:38:20.462105       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375] <==
	I1126 19:37:36.845279       1 server_linux.go:53] "Using iptables proxy"
	I1126 19:37:36.921712       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 19:37:37.022480       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 19:37:37.022556       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1126 19:37:37.022638       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 19:37:37.237448       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 19:37:37.273758       1 server_linux.go:132] "Using iptables Proxier"
	I1126 19:37:37.524151       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 19:37:37.547615       1 server.go:527] "Version info" version="v1.34.1"
	I1126 19:37:37.547649       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 19:37:37.589274       1 config.go:200] "Starting service config controller"
	I1126 19:37:37.589297       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 19:37:37.589460       1 config.go:106] "Starting endpoint slice config controller"
	I1126 19:37:37.589466       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 19:37:37.589550       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 19:37:37.589554       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 19:37:37.600462       1 config.go:309] "Starting node config controller"
	I1126 19:37:37.600485       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 19:37:37.600493       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 19:37:37.689803       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 19:37:37.689841       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 19:37:37.689882       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353] <==
	E1126 19:37:28.534607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 19:37:28.534641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:37:28.534696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 19:37:28.534730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 19:37:28.534763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 19:37:28.534796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 19:37:28.538811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:37:28.538992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:37:28.539073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 19:37:29.385458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 19:37:29.385458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 19:37:29.399025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 19:37:29.403829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 19:37:29.403842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 19:37:29.419365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 19:37:29.481522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 19:37:29.595155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 19:37:29.624446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 19:37:29.636121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 19:37:29.651869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:37:29.674867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 19:37:29.712319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 19:37:29.776478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:37:29.818237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1126 19:37:31.594505       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 19:39:55 addons-152801 kubelet[1253]: I1126 19:39:55.055157    1253 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc9cf14a33395c9c58cfb097ac393320df458706f0145f21dfe7efb713eb8d5e"
	Nov 26 19:39:55 addons-152801 kubelet[1253]: E1126 19:39:55.057004    1253 status_manager.go:1018] "Failed to get status for pod" err="pods \"registry-test\" is forbidden: User \"system:node:addons-152801\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-152801' and this object" podUID="1ac34af6-ddfe-4210-9e3d-1403ba347a67" pod="default/registry-test"
	Nov 26 19:39:55 addons-152801 kubelet[1253]: E1126 19:39:55.058111    1253 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-create-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087\" is forbidden: User \"system:node:addons-152801\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-152801' and this object" podUID="aaeee2cf-690d-4ed1-8afe-d04adedb21cf" pod="local-path-storage/helper-pod-create-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087"
	Nov 26 19:39:55 addons-152801 kubelet[1253]: I1126 19:39:55.090813    1253 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac34af6-ddfe-4210-9e3d-1403ba347a67" path="/var/lib/kubelet/pods/1ac34af6-ddfe-4210-9e3d-1403ba347a67/volumes"
	Nov 26 19:39:55 addons-152801 kubelet[1253]: I1126 19:39:55.091249    1253 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaeee2cf-690d-4ed1-8afe-d04adedb21cf" path="/var/lib/kubelet/pods/aaeee2cf-690d-4ed1-8afe-d04adedb21cf/volumes"
	Nov 26 19:39:55 addons-152801 kubelet[1253]: I1126 19:39:55.988253    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-gcp-creds\") pod \"test-local-path\" (UID: \"46f1716c-6f12-4a3e-892e-d4ce0ff42f12\") " pod="default/test-local-path"
	Nov 26 19:39:55 addons-152801 kubelet[1253]: I1126 19:39:55.988313    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6c7297e5-0e4c-403d-b89a-2e241166a087\" (UniqueName: \"kubernetes.io/host-path/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087\") pod \"test-local-path\" (UID: \"46f1716c-6f12-4a3e-892e-d4ce0ff42f12\") " pod="default/test-local-path"
	Nov 26 19:39:55 addons-152801 kubelet[1253]: I1126 19:39:55.988339    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t6x2\" (UniqueName: \"kubernetes.io/projected/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-kube-api-access-9t6x2\") pod \"test-local-path\" (UID: \"46f1716c-6f12-4a3e-892e-d4ce0ff42f12\") " pod="default/test-local-path"
	Nov 26 19:39:58 addons-152801 kubelet[1253]: I1126 19:39:58.203903    1253 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-gcp-creds\") pod \"46f1716c-6f12-4a3e-892e-d4ce0ff42f12\" (UID: \"46f1716c-6f12-4a3e-892e-d4ce0ff42f12\") "
	Nov 26 19:39:58 addons-152801 kubelet[1253]: I1126 19:39:58.204425    1253 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087\") pod \"46f1716c-6f12-4a3e-892e-d4ce0ff42f12\" (UID: \"46f1716c-6f12-4a3e-892e-d4ce0ff42f12\") "
	Nov 26 19:39:58 addons-152801 kubelet[1253]: I1126 19:39:58.204477    1253 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t6x2\" (UniqueName: \"kubernetes.io/projected/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-kube-api-access-9t6x2\") pod \"46f1716c-6f12-4a3e-892e-d4ce0ff42f12\" (UID: \"46f1716c-6f12-4a3e-892e-d4ce0ff42f12\") "
	Nov 26 19:39:58 addons-152801 kubelet[1253]: I1126 19:39:58.203987    1253 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "46f1716c-6f12-4a3e-892e-d4ce0ff42f12" (UID: "46f1716c-6f12-4a3e-892e-d4ce0ff42f12"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 26 19:39:58 addons-152801 kubelet[1253]: I1126 19:39:58.204679    1253 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087" (OuterVolumeSpecName: "data") pod "46f1716c-6f12-4a3e-892e-d4ce0ff42f12" (UID: "46f1716c-6f12-4a3e-892e-d4ce0ff42f12"). InnerVolumeSpecName "pvc-6c7297e5-0e4c-403d-b89a-2e241166a087". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 26 19:39:58 addons-152801 kubelet[1253]: I1126 19:39:58.209116    1253 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-kube-api-access-9t6x2" (OuterVolumeSpecName: "kube-api-access-9t6x2") pod "46f1716c-6f12-4a3e-892e-d4ce0ff42f12" (UID: "46f1716c-6f12-4a3e-892e-d4ce0ff42f12"). InnerVolumeSpecName "kube-api-access-9t6x2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 26 19:39:58 addons-152801 kubelet[1253]: I1126 19:39:58.305123    1253 reconciler_common.go:299] "Volume detached for volume \"pvc-6c7297e5-0e4c-403d-b89a-2e241166a087\" (UniqueName: \"kubernetes.io/host-path/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087\") on node \"addons-152801\" DevicePath \"\""
	Nov 26 19:39:58 addons-152801 kubelet[1253]: I1126 19:39:58.305177    1253 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9t6x2\" (UniqueName: \"kubernetes.io/projected/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-kube-api-access-9t6x2\") on node \"addons-152801\" DevicePath \"\""
	Nov 26 19:39:58 addons-152801 kubelet[1253]: I1126 19:39:58.305190    1253 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/46f1716c-6f12-4a3e-892e-d4ce0ff42f12-gcp-creds\") on node \"addons-152801\" DevicePath \"\""
	Nov 26 19:39:59 addons-152801 kubelet[1253]: I1126 19:39:59.084423    1253 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07afe31cdbb0ed9c2dac864add2548675510beb725153fa8b9603581943b8745"
	Nov 26 19:39:59 addons-152801 kubelet[1253]: I1126 19:39:59.918991    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b548h\" (UniqueName: \"kubernetes.io/projected/95fa449c-899e-4d3a-9821-8e37609debbd-kube-api-access-b548h\") pod \"helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087\" (UID: \"95fa449c-899e-4d3a-9821-8e37609debbd\") " pod="local-path-storage/helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087"
	Nov 26 19:39:59 addons-152801 kubelet[1253]: I1126 19:39:59.919603    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/95fa449c-899e-4d3a-9821-8e37609debbd-gcp-creds\") pod \"helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087\" (UID: \"95fa449c-899e-4d3a-9821-8e37609debbd\") " pod="local-path-storage/helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087"
	Nov 26 19:39:59 addons-152801 kubelet[1253]: I1126 19:39:59.919774    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/95fa449c-899e-4d3a-9821-8e37609debbd-data\") pod \"helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087\" (UID: \"95fa449c-899e-4d3a-9821-8e37609debbd\") " pod="local-path-storage/helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087"
	Nov 26 19:39:59 addons-152801 kubelet[1253]: I1126 19:39:59.919894    1253 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/95fa449c-899e-4d3a-9821-8e37609debbd-script\") pod \"helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087\" (UID: \"95fa449c-899e-4d3a-9821-8e37609debbd\") " pod="local-path-storage/helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087"
	Nov 26 19:40:00 addons-152801 kubelet[1253]: W1126 19:40:00.158484    1253 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f8d1177ed55295d7d5ed7966fd7aa6346caad073d5f76df25982eaf0268c0ae/crio-5c03c3ffda7e0135b8de1a66c72bc010e14288c2854751bf093144148124c739 WatchSource:0}: Error finding container 5c03c3ffda7e0135b8de1a66c72bc010e14288c2854751bf093144148124c739: Status 404 returned error can't find the container with id 5c03c3ffda7e0135b8de1a66c72bc010e14288c2854751bf093144148124c739
	Nov 26 19:40:01 addons-152801 kubelet[1253]: I1126 19:40:01.087259    1253 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-scxrq" secret="" err="secret \"gcp-auth\" not found"
	Nov 26 19:40:01 addons-152801 kubelet[1253]: I1126 19:40:01.097312    1253 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f1716c-6f12-4a3e-892e-d4ce0ff42f12" path="/var/lib/kubelet/pods/46f1716c-6f12-4a3e-892e-d4ce0ff42f12/volumes"
	
	
	==> storage-provisioner [d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd] <==
	W1126 19:39:37.471099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:39.475061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:39.482136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:41.484775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:41.489500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:43.492533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:43.497088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:45.500944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:45.508041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:47.511777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:47.516172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:49.523679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:49.549249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:51.556170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:51.562533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:53.565363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:53.570520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:55.573047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:55.577895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:57.581289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:57.588205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:59.591522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:39:59.600418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:01.606370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:40:01.612610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-152801 -n addons-152801
helpers_test.go:269: (dbg) Run:  kubectl --context addons-152801 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-g8z27 ingress-nginx-admission-patch-xlj8c registry-creds-764b6fb674-hcfnw helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-152801 describe pod ingress-nginx-admission-create-g8z27 ingress-nginx-admission-patch-xlj8c registry-creds-764b6fb674-hcfnw helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-152801 describe pod ingress-nginx-admission-create-g8z27 ingress-nginx-admission-patch-xlj8c registry-creds-764b6fb674-hcfnw helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087: exit status 1 (130.15204ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g8z27" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xlj8c" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-hcfnw" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-152801 describe pod ingress-nginx-admission-create-g8z27 ingress-nginx-admission-patch-xlj8c registry-creds-764b6fb674-hcfnw helper-pod-delete-pvc-6c7297e5-0e4c-403d-b89a-2e241166a087: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable headlamp --alsologtostderr -v=1: exit status 11 (316.581387ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:40:02.999786   12175 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:40:03.006595   12175 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:03.006659   12175 out.go:374] Setting ErrFile to fd 2...
	I1126 19:40:03.006680   12175 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:40:03.007113   12175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:40:03.007535   12175 mustload.go:66] Loading cluster: addons-152801
	I1126 19:40:03.008231   12175 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:03.008257   12175 addons.go:622] checking whether the cluster is paused
	I1126 19:40:03.008413   12175 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:40:03.008432   12175 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:40:03.008933   12175 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:40:03.030282   12175 ssh_runner.go:195] Run: systemctl --version
	I1126 19:40:03.030360   12175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:40:03.049797   12175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:40:03.160487   12175 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:40:03.160629   12175 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:40:03.199925   12175 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:40:03.199946   12175 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:40:03.199951   12175 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:40:03.199954   12175 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:40:03.199958   12175 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:40:03.199961   12175 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:40:03.199965   12175 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:40:03.199968   12175 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:40:03.199971   12175 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:40:03.199977   12175 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:40:03.199981   12175 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:40:03.199984   12175 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:40:03.199987   12175 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:40:03.199991   12175 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:40:03.199994   12175 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:40:03.200005   12175 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:40:03.200014   12175 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:40:03.200019   12175 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:40:03.200025   12175 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:40:03.200028   12175 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:40:03.200033   12175 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:40:03.200039   12175 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:40:03.200042   12175 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:40:03.200045   12175 cri.go:89] found id: ""
	I1126 19:40:03.200103   12175 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:40:03.216425   12175 out.go:203] 
	W1126 19:40:03.219557   12175 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:40:03.219589   12175 out.go:285] * 
	* 
	W1126 19:40:03.224419   12175 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:40:03.227594   12175 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (4.43s)

TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-chzvk" [4bb137c4-1114-496e-b58a-62447e087fc0] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003842484s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (250.796261ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:39:58.598111   11467 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:39:58.598273   11467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:58.598279   11467 out.go:374] Setting ErrFile to fd 2...
	I1126 19:39:58.598284   11467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:58.598706   11467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:39:58.599042   11467 mustload.go:66] Loading cluster: addons-152801
	I1126 19:39:58.599665   11467 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:58.599677   11467 addons.go:622] checking whether the cluster is paused
	I1126 19:39:58.599802   11467 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:58.599813   11467 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:39:58.600770   11467 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:39:58.620542   11467 ssh_runner.go:195] Run: systemctl --version
	I1126 19:39:58.620601   11467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:39:58.638961   11467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:39:58.744229   11467 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:39:58.744336   11467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:39:58.773498   11467 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:39:58.773521   11467 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:39:58.773526   11467 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:39:58.773530   11467 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:39:58.773533   11467 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:39:58.773537   11467 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:39:58.773540   11467 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:39:58.773544   11467 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:39:58.773547   11467 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:39:58.773553   11467 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:39:58.773556   11467 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:39:58.773559   11467 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:39:58.773562   11467 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:39:58.773565   11467 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:39:58.773568   11467 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:39:58.773576   11467 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:39:58.773583   11467 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:39:58.773589   11467 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:39:58.773592   11467 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:39:58.773595   11467 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:39:58.773599   11467 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:39:58.773609   11467 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:39:58.773612   11467 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:39:58.773616   11467 cri.go:89] found id: ""
	I1126 19:39:58.773668   11467 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:39:58.788494   11467 out.go:203] 
	W1126 19:39:58.791430   11467 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:39:58.791455   11467 out.go:285] * 
	* 
	W1126 19:39:58.796144   11467 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:39:58.799037   11467 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)
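Note: each of the addon-disable failures in this report exits with `MK_ADDON_DISABLE_PAUSED` because minikube's paused-state check shells out to `sudo runc list -f json`, which errors with `open /run/runc: no such file or directory` when runc's default state directory is absent (plausible on this crio node, where the runtime may keep container state elsewhere). A minimal sketch of that failing precondition, using a hypothetical helper name, not minikube's actual code:

```shell
#!/bin/sh
# Hypothetical helper mirroring the failing precondition: `runc list`
# reads its state directory (/run/runc by default) and fails when the
# directory does not exist, which is what aborts the paused-state check.
check_runc_state_dir() {
  dir="${1:-/run/runc}"
  if [ -d "$dir" ]; then
    echo "present: $dir"
  else
    echo "missing: $dir" >&2
    return 1
  fi
}

# Probe a path that does not exist, as the error in the logs suggests:
check_runc_state_dir /nonexistent/runc || echo "paused check would fail"
```

This is only a diagnostic sketch of the error shape; on a real node the equivalent probe would be `minikube ssh "sudo runc list -f json"` (the exact command quoted in the stderr above).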

                                                
                                    
TestAddons/parallel/LocalPath (11.1s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-152801 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-152801 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-152801 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [46f1716c-6f12-4a3e-892e-d4ce0ff42f12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [46f1716c-6f12-4a3e-892e-d4ce0ff42f12] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [46f1716c-6f12-4a3e-892e-d4ce0ff42f12] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003661451s
addons_test.go:967: (dbg) Run:  kubectl --context addons-152801 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 ssh "cat /opt/local-path-provisioner/pvc-6c7297e5-0e4c-403d-b89a-2e241166a087_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-152801 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-152801 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (761.053414ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:39:59.912044   11721 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:39:59.912271   11721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:59.912277   11721 out.go:374] Setting ErrFile to fd 2...
	I1126 19:39:59.912282   11721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:59.912587   11721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:39:59.912903   11721 mustload.go:66] Loading cluster: addons-152801
	I1126 19:39:59.913312   11721 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:59.913324   11721 addons.go:622] checking whether the cluster is paused
	I1126 19:39:59.913456   11721 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:59.913471   11721 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:39:59.914123   11721 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:39:59.937788   11721 ssh_runner.go:195] Run: systemctl --version
	I1126 19:39:59.937862   11721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:39:59.964318   11721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:40:00.130686   11721 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:40:00.130840   11721 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:40:00.406473   11721 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:40:00.406556   11721 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:40:00.406577   11721 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:40:00.406601   11721 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:40:00.406644   11721 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:40:00.406666   11721 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:40:00.406697   11721 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:40:00.406730   11721 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:40:00.406751   11721 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:40:00.406775   11721 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:40:00.406806   11721 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:40:00.406825   11721 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:40:00.406844   11721 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:40:00.406865   11721 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:40:00.406899   11721 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:40:00.406925   11721 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:40:00.406957   11721 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:40:00.406991   11721 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:40:00.407010   11721 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:40:00.407036   11721 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:40:00.407081   11721 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:40:00.407106   11721 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:40:00.407133   11721 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:40:00.407171   11721 cri.go:89] found id: ""
	I1126 19:40:00.407263   11721 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:40:00.546446   11721 out.go:203] 
	W1126 19:40:00.562490   11721 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:40:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:40:00.562523   11721 out.go:285] * 
	* 
	W1126 19:40:00.567742   11721 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:40:00.575564   11721 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (11.10s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rrntc" [658d2994-5e58-41f4-b7ef-fbca089ee861] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003679775s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (255.797343ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:39:49.275894   11093 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:39:49.276065   11093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:49.276075   11093 out.go:374] Setting ErrFile to fd 2...
	I1126 19:39:49.276081   11093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:49.276324   11093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:39:49.276584   11093 mustload.go:66] Loading cluster: addons-152801
	I1126 19:39:49.276951   11093 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:49.276968   11093 addons.go:622] checking whether the cluster is paused
	I1126 19:39:49.277073   11093 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:49.277095   11093 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:39:49.277589   11093 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:39:49.294103   11093 ssh_runner.go:195] Run: systemctl --version
	I1126 19:39:49.294172   11093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:39:49.312168   11093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:39:49.424224   11093 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:39:49.424336   11093 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:39:49.455322   11093 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:39:49.455353   11093 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:39:49.455359   11093 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:39:49.455363   11093 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:39:49.455367   11093 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:39:49.455370   11093 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:39:49.455374   11093 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:39:49.455377   11093 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:39:49.455381   11093 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:39:49.455389   11093 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:39:49.455393   11093 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:39:49.455396   11093 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:39:49.455399   11093 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:39:49.455401   11093 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:39:49.455404   11093 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:39:49.455410   11093 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:39:49.455417   11093 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:39:49.455421   11093 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:39:49.455425   11093 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:39:49.455428   11093 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:39:49.455432   11093 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:39:49.455439   11093 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:39:49.455442   11093 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:39:49.455446   11093 cri.go:89] found id: ""
	I1126 19:39:49.455495   11093 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:39:49.469788   11093 out.go:203] 
	W1126 19:39:49.472683   11093 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:39:49.472710   11093 out.go:285] * 
	* 
	W1126 19:39:49.477447   11093 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:39:49.480261   11093 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.26s)

                                                
                                    
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-4wcfn" [3c1d6e09-5612-49d1-ad0a-e53e787ebaec] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003154592s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-152801 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-152801 addons disable yakd --alsologtostderr -v=1: exit status 11 (258.890938ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 19:39:43.015177   11020 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:39:43.015347   11020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:43.015366   11020 out.go:374] Setting ErrFile to fd 2...
	I1126 19:39:43.015372   11020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:39:43.015722   11020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:39:43.016607   11020 mustload.go:66] Loading cluster: addons-152801
	I1126 19:39:43.017058   11020 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:43.017085   11020 addons.go:622] checking whether the cluster is paused
	I1126 19:39:43.017200   11020 config.go:182] Loaded profile config "addons-152801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:39:43.017210   11020 host.go:66] Checking if "addons-152801" exists ...
	I1126 19:39:43.017757   11020 cli_runner.go:164] Run: docker container inspect addons-152801 --format={{.State.Status}}
	I1126 19:39:43.038765   11020 ssh_runner.go:195] Run: systemctl --version
	I1126 19:39:43.038826   11020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-152801
	I1126 19:39:43.062036   11020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/addons-152801/id_rsa Username:docker}
	I1126 19:39:43.165340   11020 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:39:43.165439   11020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:39:43.194400   11020 cri.go:89] found id: "5cdc59e6553811d585e4425dfe8bcea605bdfd3256533a6fe5b597fb75017056"
	I1126 19:39:43.194425   11020 cri.go:89] found id: "0d2525ad7c6f902e335c782d8f0fc79da426bb75017b1c9e899bc8aed1bbe3ee"
	I1126 19:39:43.194430   11020 cri.go:89] found id: "68f9098f874c12f78de41a91d6e4af7add154eee7ec76e2fe2a476669725a2da"
	I1126 19:39:43.194435   11020 cri.go:89] found id: "c7b9d1130078420a6523f7e34d73eb8d6f200c5f3655d29470e31229b85b1ee4"
	I1126 19:39:43.194438   11020 cri.go:89] found id: "a4e36f02d445a6f744743a3f5f8c96325744fff14e64d79fdb60c09fbf492f3e"
	I1126 19:39:43.194442   11020 cri.go:89] found id: "333ebda1f94e9725bb17bb30e1799c0db1d280213cb268e4c348bbd0de91a50c"
	I1126 19:39:43.194445   11020 cri.go:89] found id: "be6e4f7ecbd7cca6daae7f861da7ebb626146d510221773bdf17b489c4ba95c5"
	I1126 19:39:43.194448   11020 cri.go:89] found id: "357f60871c591dfeeeb3421cef368ab8ee51ddb0d18679e4a68be4b90b26b1c1"
	I1126 19:39:43.194451   11020 cri.go:89] found id: "bbda721ec7889dc87b2aaddba15c7e53e82efa6dfa34deee7383fefad54e80b2"
	I1126 19:39:43.194461   11020 cri.go:89] found id: "5aa817b9fa068d3b5f1ff6c79bbb53cc0ea7159fbe6e0892493ba168729000f7"
	I1126 19:39:43.194465   11020 cri.go:89] found id: "33e2dbaa04cd84d6849c2ef1d8d0de63f921526401a415aa8bc4e1136f635305"
	I1126 19:39:43.194468   11020 cri.go:89] found id: "67ccc4b888832a51ecede6ad7a3c750244a34aebf1efe04ba91d71b95e21b9c8"
	I1126 19:39:43.194472   11020 cri.go:89] found id: "e3af750d29e79fb14ead17b806691530575e5dc7f7552dc503012002b54788cb"
	I1126 19:39:43.194475   11020 cri.go:89] found id: "3cd75fe86fc631471b76efa8a570600fdfdbc6797c15b197c695c933033513aa"
	I1126 19:39:43.194478   11020 cri.go:89] found id: "f900f636f3c4de61ad35238077f39b5bdd30436cd87679c7961bc1433072180c"
	I1126 19:39:43.194487   11020 cri.go:89] found id: "d0021ecd91f068066e3eb10053942fcf7376f859f6319470f7aad4d7cb5cd0bd"
	I1126 19:39:43.194494   11020 cri.go:89] found id: "2c15569036061a9f83e6bce3d1d167f620508c0bf56d754d4faa70a8a892eb32"
	I1126 19:39:43.194499   11020 cri.go:89] found id: "4cfa09096b0865303b96c3f12ecdd8eb7d2a90f3c096730679d96e08b5c96707"
	I1126 19:39:43.194502   11020 cri.go:89] found id: "4f25a6570f326b6af22399a0c54f707ed1be4ebf3de0c4354f49aba394ea9375"
	I1126 19:39:43.194509   11020 cri.go:89] found id: "4365cc22027bb3be5223dca66251b164d02dd6f7e6a37987089fee289b512b72"
	I1126 19:39:43.194514   11020 cri.go:89] found id: "b21aa95449406f4aff4269318471f0dfc9e0b52cc19eaa0312f0aa951e334515"
	I1126 19:39:43.194517   11020 cri.go:89] found id: "899c0cef3d3c5561d2bd702415f0d36d93a0c68bd3550e04f829d3f99f0bb353"
	I1126 19:39:43.194520   11020 cri.go:89] found id: "6bd6a4e5eae309806cd5983d960e4f8a2a11af40d0f0ee4f48f7ed11c843421b"
	I1126 19:39:43.194522   11020 cri.go:89] found id: ""
	I1126 19:39:43.194574   11020 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 19:39:43.209366   11020 out.go:203] 
	W1126 19:39:43.211994   11020 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:39:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 19:39:43.212012   11020 out.go:285] * 
	* 
	W1126 19:39:43.216778   11020 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 19:39:43.219620   11020 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-152801 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-793215 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-793215 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-ncgw7" [f6519b52-f309-4387-b608-494ae623ee3f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1126 19:47:11.971580    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-793215 -n functional-793215
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-26 19:57:00.219882355 +0000 UTC m=+1292.075496418
functional_test.go:1645: (dbg) Run:  kubectl --context functional-793215 describe po hello-node-connect-7d85dfc575-ncgw7 -n default
functional_test.go:1645: (dbg) kubectl --context functional-793215 describe po hello-node-connect-7d85dfc575-ncgw7 -n default:
Name:             hello-node-connect-7d85dfc575-ncgw7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-793215/192.168.49.2
Start Time:       Wed, 26 Nov 2025 19:46:59 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qj6j (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4qj6j:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ncgw7 to functional-793215
Normal   Pulling    6m41s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m41s (x5 over 9m39s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m41s (x5 over 9m39s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m31s (x21 over 9m39s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m31s (x21 over 9m39s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-793215 logs hello-node-connect-7d85dfc575-ncgw7 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-793215 logs hello-node-connect-7d85dfc575-ncgw7 -n default: exit status 1 (118.839662ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ncgw7" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-793215 logs hello-node-connect-7d85dfc575-ncgw7 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
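The root cause is visible in the kubelet events above: CRI-O on this node resolves short image names in enforcing mode, so the unqualified reference `kicbase/echo-server` is rejected because it could resolve against more than one configured search registry. A minimal sketch of the node-side setting involved (illustrative; the actual `registries.conf` shipped in the kicbase image may differ):

```toml
# /etc/containers/registries.conf — hypothetical excerpt
# With more than one search registry and enforcing mode, an unqualified
# name like "kicbase/echo-server" is rejected as ambiguous instead of
# being tried against each registry in turn.
unqualified-search-registries = ["docker.io", "quay.io"]
short-name-mode = "enforcing"
```

Qualifying the image as `docker.io/kicbase/echo-server` (or defining an `[aliases]` entry for it) would sidestep the ambiguity.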
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-793215 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-ncgw7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-793215/192.168.49.2
Start Time:       Wed, 26 Nov 2025 19:46:59 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qj6j (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4qj6j:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ncgw7 to functional-793215
Normal   Pulling    6m41s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m41s (x5 over 9m39s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m41s (x5 over 9m39s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m31s (x21 over 9m39s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m31s (x21 over 9m39s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-793215 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-793215 logs -l app=hello-node-connect: exit status 1 (85.485462ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-ncgw7" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-793215 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-793215 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.187.49
IPs:                      10.97.187.49
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31439/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
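The empty `Endpoints:` field above confirms the failure mode end to end: the Service selector never matched a Ready pod, so NodePort 31439 had nothing to route to. Assuming `docker.io` is the registry the test intends, a corrected deployment would pin the fully qualified image name; a hypothetical sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node-connect
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node-connect
  template:
    metadata:
      labels:
        app: hello-node-connect
    spec:
      containers:
      - name: echo-server
        # Fully qualified reference avoids CRI-O's short-name ambiguity check.
        image: docker.io/kicbase/echo-server:latest
        ports:
        - containerPort: 8080
```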
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-793215
helpers_test.go:243: (dbg) docker inspect functional-793215:

-- stdout --
	[
	    {
	        "Id": "c0fd1b38b65aed2e5390c72ba13fef93e5f9ecb0d77c3239329d94effd41bc1f",
	        "Created": "2025-11-26T19:43:59.104771377Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19792,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T19:43:59.161014053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/c0fd1b38b65aed2e5390c72ba13fef93e5f9ecb0d77c3239329d94effd41bc1f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c0fd1b38b65aed2e5390c72ba13fef93e5f9ecb0d77c3239329d94effd41bc1f/hostname",
	        "HostsPath": "/var/lib/docker/containers/c0fd1b38b65aed2e5390c72ba13fef93e5f9ecb0d77c3239329d94effd41bc1f/hosts",
	        "LogPath": "/var/lib/docker/containers/c0fd1b38b65aed2e5390c72ba13fef93e5f9ecb0d77c3239329d94effd41bc1f/c0fd1b38b65aed2e5390c72ba13fef93e5f9ecb0d77c3239329d94effd41bc1f-json.log",
	        "Name": "/functional-793215",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-793215:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-793215",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c0fd1b38b65aed2e5390c72ba13fef93e5f9ecb0d77c3239329d94effd41bc1f",
	                "LowerDir": "/var/lib/docker/overlay2/f33a90fea40eb5ee0d124b31694063e6d51ea661cba5b56ce11c6738c6b1a624-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f33a90fea40eb5ee0d124b31694063e6d51ea661cba5b56ce11c6738c6b1a624/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f33a90fea40eb5ee0d124b31694063e6d51ea661cba5b56ce11c6738c6b1a624/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f33a90fea40eb5ee0d124b31694063e6d51ea661cba5b56ce11c6738c6b1a624/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-793215",
	                "Source": "/var/lib/docker/volumes/functional-793215/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-793215",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-793215",
	                "name.minikube.sigs.k8s.io": "functional-793215",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67362fe596dae92b174b5d2f07a151103d2c68cf137a6b687a848dddb3213083",
	            "SandboxKey": "/var/run/docker/netns/67362fe596da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-793215": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:b7:e1:33:ac:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2b9d8968ee959154231c6a6a78ca0e3642dd580017ab465f0e738488ad349e87",
	                    "EndpointID": "a93dd878e7e80a84356beb304b34945d8efe293150fe8662f42aa37d42d399fe",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-793215",
	                        "c0fd1b38b65a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-793215 -n functional-793215
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-793215 logs -n 25: (1.499197109s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ ssh     │ functional-793215 ssh sudo crictl images                                                                 │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ ssh     │ functional-793215 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ ssh     │ functional-793215 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │                     │
	│ cache   │ functional-793215 cache reload                                                                           │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ ssh     │ functional-793215 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ kubectl │ functional-793215 kubectl -- --context functional-793215 get pods                                        │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ start   │ -p functional-793215 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ service │ invalid-svc -p functional-793215                                                                         │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │                     │
	│ ssh     │ functional-793215 ssh echo hello                                                                         │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ config  │ functional-793215 config unset cpus                                                                      │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ config  │ functional-793215 config get cpus                                                                        │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │                     │
	│ config  │ functional-793215 config set cpus 2                                                                      │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ config  │ functional-793215 config get cpus                                                                        │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ config  │ functional-793215 config unset cpus                                                                      │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ ssh     │ functional-793215 ssh cat /etc/hostname                                                                  │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ config  │ functional-793215 config get cpus                                                                        │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │                     │
	│ tunnel  │ functional-793215 tunnel --alsologtostderr                                                               │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │                     │
	│ tunnel  │ functional-793215 tunnel --alsologtostderr                                                               │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │                     │
	│ tunnel  │ functional-793215 tunnel --alsologtostderr                                                               │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │                     │
	│ addons  │ functional-793215 addons list                                                                            │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	│ addons  │ functional-793215 addons list -o json                                                                    │ functional-793215 │ jenkins │ v1.37.0 │ 26 Nov 25 19:46 UTC │ 26 Nov 25 19:46 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:46:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:46:07.756054   24166 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:46:07.758721   24166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:46:07.758751   24166 out.go:374] Setting ErrFile to fd 2...
	I1126 19:46:07.758755   24166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:46:07.759130   24166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:46:07.759621   24166 out.go:368] Setting JSON to false
	I1126 19:46:07.760477   24166 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1698,"bootTime":1764184670,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 19:46:07.760611   24166 start.go:143] virtualization:  
	I1126 19:46:07.763588   24166 out.go:179] * [functional-793215] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 19:46:07.767067   24166 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:46:07.767149   24166 notify.go:221] Checking for updates...
	I1126 19:46:07.772242   24166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:46:07.774927   24166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 19:46:07.777560   24166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 19:46:07.780247   24166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 19:46:07.782944   24166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:46:07.786143   24166 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:46:07.786234   24166 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:46:07.810242   24166 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 19:46:07.810369   24166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:46:07.875224   24166 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-26 19:46:07.865233322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:46:07.875321   24166 docker.go:319] overlay module found
	I1126 19:46:07.878315   24166 out.go:179] * Using the docker driver based on existing profile
	I1126 19:46:07.881052   24166 start.go:309] selected driver: docker
	I1126 19:46:07.881060   24166 start.go:927] validating driver "docker" against &{Name:functional-793215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-793215 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:46:07.881161   24166 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:46:07.881254   24166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:46:07.952233   24166 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-26 19:46:07.943354318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:46:07.952637   24166 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:46:07.952662   24166 cni.go:84] Creating CNI manager for ""
	I1126 19:46:07.952716   24166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:46:07.952757   24166 start.go:353] cluster config:
	{Name:functional-793215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-793215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:46:07.955959   24166 out.go:179] * Starting "functional-793215" primary control-plane node in "functional-793215" cluster
	I1126 19:46:07.958694   24166 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 19:46:07.961571   24166 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 19:46:07.964883   24166 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:46:07.964919   24166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 19:46:07.964927   24166 cache.go:65] Caching tarball of preloaded images
	I1126 19:46:07.964944   24166 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 19:46:07.965022   24166 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 19:46:07.965031   24166 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 19:46:07.965139   24166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/config.json ...
	I1126 19:46:07.987451   24166 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 19:46:07.987461   24166 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 19:46:07.987473   24166 cache.go:243] Successfully downloaded all kic artifacts
	I1126 19:46:07.987544   24166 start.go:360] acquireMachinesLock for functional-793215: {Name:mkdd64bbcb87bdda20a3d3a10476b14391f04c2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 19:46:07.987610   24166 start.go:364] duration metric: took 48.517µs to acquireMachinesLock for "functional-793215"
	I1126 19:46:07.987628   24166 start.go:96] Skipping create...Using existing machine configuration
	I1126 19:46:07.987633   24166 fix.go:54] fixHost starting: 
	I1126 19:46:07.987893   24166 cli_runner.go:164] Run: docker container inspect functional-793215 --format={{.State.Status}}
	I1126 19:46:08.008858   24166 fix.go:112] recreateIfNeeded on functional-793215: state=Running err=<nil>
	W1126 19:46:08.008889   24166 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 19:46:08.012095   24166 out.go:252] * Updating the running docker "functional-793215" container ...
	I1126 19:46:08.012133   24166 machine.go:94] provisionDockerMachine start ...
	I1126 19:46:08.012227   24166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:46:08.034702   24166 main.go:143] libmachine: Using SSH client type: native
	I1126 19:46:08.035021   24166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1126 19:46:08.035027   24166 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 19:46:08.185381   24166 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-793215
	
	I1126 19:46:08.185395   24166 ubuntu.go:182] provisioning hostname "functional-793215"
	I1126 19:46:08.185454   24166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:46:08.203962   24166 main.go:143] libmachine: Using SSH client type: native
	I1126 19:46:08.204268   24166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1126 19:46:08.204276   24166 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-793215 && echo "functional-793215" | sudo tee /etc/hostname
	I1126 19:46:08.365446   24166 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-793215
	
	I1126 19:46:08.365513   24166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:46:08.382567   24166 main.go:143] libmachine: Using SSH client type: native
	I1126 19:46:08.382875   24166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1126 19:46:08.382889   24166 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-793215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-793215/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-793215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 19:46:08.530455   24166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 19:46:08.530470   24166 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 19:46:08.530500   24166 ubuntu.go:190] setting up certificates
	I1126 19:46:08.530508   24166 provision.go:84] configureAuth start
	I1126 19:46:08.530572   24166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-793215
	I1126 19:46:08.547171   24166 provision.go:143] copyHostCerts
	I1126 19:46:08.547239   24166 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 19:46:08.547255   24166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 19:46:08.547332   24166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 19:46:08.547424   24166 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 19:46:08.547428   24166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 19:46:08.547452   24166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 19:46:08.547502   24166 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 19:46:08.547506   24166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 19:46:08.547527   24166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 19:46:08.547570   24166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.functional-793215 san=[127.0.0.1 192.168.49.2 functional-793215 localhost minikube]
	I1126 19:46:08.706558   24166 provision.go:177] copyRemoteCerts
	I1126 19:46:08.706620   24166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 19:46:08.706658   24166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:46:08.723602   24166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
	I1126 19:46:08.825681   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 19:46:08.842985   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 19:46:08.860960   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 19:46:08.877979   24166 provision.go:87] duration metric: took 347.451774ms to configureAuth
	I1126 19:46:08.877996   24166 ubuntu.go:206] setting minikube options for container-runtime
	I1126 19:46:08.878199   24166 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:46:08.878304   24166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:46:08.895335   24166 main.go:143] libmachine: Using SSH client type: native
	I1126 19:46:08.895635   24166 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1126 19:46:08.895648   24166 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 19:46:14.325835   24166 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 19:46:14.325848   24166 machine.go:97] duration metric: took 6.313709162s to provisionDockerMachine
	I1126 19:46:14.325858   24166 start.go:293] postStartSetup for "functional-793215" (driver="docker")
	I1126 19:46:14.325868   24166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 19:46:14.325946   24166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 19:46:14.325988   24166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:46:14.343507   24166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
	I1126 19:46:14.445439   24166 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 19:46:14.448602   24166 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 19:46:14.448623   24166 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 19:46:14.448632   24166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 19:46:14.448698   24166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 19:46:14.448797   24166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 19:46:14.448883   24166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/test/nested/copy/4129/hosts -> hosts in /etc/test/nested/copy/4129
	I1126 19:46:14.448926   24166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4129
	I1126 19:46:14.456325   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 19:46:14.472923   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/test/nested/copy/4129/hosts --> /etc/test/nested/copy/4129/hosts (40 bytes)
	I1126 19:46:14.491036   24166 start.go:296] duration metric: took 165.163708ms for postStartSetup
	I1126 19:46:14.491115   24166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 19:46:14.491153   24166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:46:14.507883   24166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
	I1126 19:46:14.611398   24166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 19:46:14.616541   24166 fix.go:56] duration metric: took 6.628902275s for fixHost
	I1126 19:46:14.616557   24166 start.go:83] releasing machines lock for "functional-793215", held for 6.628938952s
	I1126 19:46:14.616625   24166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-793215
	I1126 19:46:14.633403   24166 ssh_runner.go:195] Run: cat /version.json
	I1126 19:46:14.633456   24166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:46:14.633488   24166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 19:46:14.633607   24166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:46:14.651074   24166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
	I1126 19:46:14.652873   24166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
	I1126 19:46:14.843019   24166 ssh_runner.go:195] Run: systemctl --version
	I1126 19:46:14.849331   24166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 19:46:14.885721   24166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 19:46:14.889794   24166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 19:46:14.889850   24166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 19:46:14.897274   24166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 19:46:14.897288   24166 start.go:496] detecting cgroup driver to use...
	I1126 19:46:14.897318   24166 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 19:46:14.897360   24166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 19:46:14.912453   24166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 19:46:14.925383   24166 docker.go:218] disabling cri-docker service (if available) ...
	I1126 19:46:14.925451   24166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 19:46:14.941087   24166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 19:46:14.954009   24166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 19:46:15.103314   24166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 19:46:15.237025   24166 docker.go:234] disabling docker service ...
	I1126 19:46:15.237078   24166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 19:46:15.252631   24166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 19:46:15.265004   24166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 19:46:15.403533   24166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 19:46:15.537512   24166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 19:46:15.551643   24166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 19:46:15.567350   24166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 19:46:15.567433   24166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:46:15.576935   24166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 19:46:15.577001   24166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:46:15.585571   24166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:46:15.594274   24166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:46:15.602862   24166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 19:46:15.610958   24166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:46:15.619577   24166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:46:15.627524   24166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 19:46:15.635960   24166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 19:46:15.643381   24166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 19:46:15.650585   24166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:46:15.787512   24166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 19:46:16.039755   24166 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 19:46:16.039822   24166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 19:46:16.043709   24166 start.go:564] Will wait 60s for crictl version
	I1126 19:46:16.043770   24166 ssh_runner.go:195] Run: which crictl
	I1126 19:46:16.047412   24166 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 19:46:16.072155   24166 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 19:46:16.072241   24166 ssh_runner.go:195] Run: crio --version
	I1126 19:46:16.100522   24166 ssh_runner.go:195] Run: crio --version
	I1126 19:46:16.131651   24166 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 19:46:16.134621   24166 cli_runner.go:164] Run: docker network inspect functional-793215 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 19:46:16.150484   24166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 19:46:16.157402   24166 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1126 19:46:16.160325   24166 kubeadm.go:884] updating cluster {Name:functional-793215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-793215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 19:46:16.160452   24166 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:46:16.160514   24166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:46:16.194929   24166 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 19:46:16.194940   24166 crio.go:433] Images already preloaded, skipping extraction
	I1126 19:46:16.194996   24166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 19:46:16.222997   24166 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 19:46:16.223009   24166 cache_images.go:86] Images are preloaded, skipping loading
	I1126 19:46:16.223016   24166 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1126 19:46:16.223110   24166 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-793215 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-793215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 19:46:16.223184   24166 ssh_runner.go:195] Run: crio config
	I1126 19:46:16.294165   24166 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1126 19:46:16.294188   24166 cni.go:84] Creating CNI manager for ""
	I1126 19:46:16.294195   24166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:46:16.294208   24166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 19:46:16.294228   24166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-793215 NodeName:functional-793215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 19:46:16.294342   24166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-793215"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
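As a side note, the generated config above is several YAML documents concatenated into one file. A minimal sketch (not part of the test run, all paths throwaway stand-ins) of inspecting what kubeadm will receive is to list the `kind:` lines per document:

```shell
# Write a skeleton of the multi-document config shown above and list
# the document kinds; grep anchors on lines starting with "kind:".
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep '^kind:' "$cfg"
```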
	I1126 19:46:16.294423   24166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 19:46:16.302205   24166 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 19:46:16.302261   24166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 19:46:16.309858   24166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1126 19:46:16.322690   24166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 19:46:16.334543   24166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1126 19:46:16.346495   24166 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1126 19:46:16.350101   24166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:46:16.491419   24166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 19:46:16.504166   24166 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215 for IP: 192.168.49.2
	I1126 19:46:16.504177   24166 certs.go:195] generating shared ca certs ...
	I1126 19:46:16.504201   24166 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:46:16.504362   24166 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 19:46:16.504420   24166 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 19:46:16.504426   24166 certs.go:257] generating profile certs ...
	I1126 19:46:16.504531   24166 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.key
	I1126 19:46:16.504585   24166 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/apiserver.key.a63f74b3
	I1126 19:46:16.504629   24166 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/proxy-client.key
	I1126 19:46:16.504737   24166 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 19:46:16.504777   24166 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 19:46:16.504792   24166 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 19:46:16.504825   24166 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 19:46:16.504855   24166 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 19:46:16.504877   24166 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 19:46:16.504918   24166 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 19:46:16.505557   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 19:46:16.523225   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 19:46:16.542605   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 19:46:16.560408   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 19:46:16.577570   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1126 19:46:16.594057   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 19:46:16.611154   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 19:46:16.628189   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 19:46:16.644530   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 19:46:16.661712   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 19:46:16.679420   24166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 19:46:16.696981   24166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 19:46:16.709378   24166 ssh_runner.go:195] Run: openssl version
	I1126 19:46:16.715602   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 19:46:16.723842   24166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:46:16.727511   24166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:46:16.727565   24166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 19:46:16.768200   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 19:46:16.775837   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 19:46:16.783867   24166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 19:46:16.787166   24166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 19:46:16.787217   24166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 19:46:16.828469   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 19:46:16.836020   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 19:46:16.843865   24166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 19:46:16.847830   24166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 19:46:16.847881   24166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 19:46:16.888444   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
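The `openssl x509 -hash` / `ln -fs` pairs above build OpenSSL's hashed trust directory by hand: a CA PEM must be reachable as `<subject-hash>.0` for `-CApath` lookups to find it. A hedged sketch of the same scheme, using a throwaway self-signed CA in a temp directory rather than the real `/etc/ssl/certs` paths:

```shell
# Create a stand-in CA, compute its subject hash, and symlink it as
# <hash>.0 — the same pattern as the logged ln -fs commands — then
# verify that CApath lookup resolves it.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
openssl verify -CApath "$dir" "$dir/ca.pem"
```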
	I1126 19:46:16.896094   24166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 19:46:16.899806   24166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 19:46:16.940216   24166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 19:46:16.980925   24166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 19:46:17.022680   24166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 19:46:17.064112   24166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 19:46:17.104746   24166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
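The six `-checkend 86400` runs above are expiry probes: the command exits 0 only if the certificate will still be valid 86400 seconds (24h) from now, which is how the restart path decides whether a cert needs regenerating. A sketch with a throwaway self-signed cert standing in for `/var/lib/minikube/certs/*.crt`:

```shell
# Generate a 30-day stand-in cert, then run the same expiry check as
# the log; since 30d > 24h, the check passes and the message prints.
crt=$(mktemp); key=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$key" -out "$crt" -days 30 2>/dev/null
if openssl x509 -noout -in "$crt" -checkend 86400; then
  echo "valid for at least 24h"
fi
```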
	I1126 19:46:17.145284   24166 kubeadm.go:401] StartCluster: {Name:functional-793215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-793215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:46:17.145358   24166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 19:46:17.145423   24166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:46:17.172065   24166 cri.go:89] found id: "2743d5b0729a9cf8b3c65f8424eef4b22a31460f7b6f4f2121fe0199144c6bff"
	I1126 19:46:17.172077   24166 cri.go:89] found id: "6e948b06b6718505894e442e1e49f2b691b7b1db860ea392889d2edf82f01de6"
	I1126 19:46:17.172080   24166 cri.go:89] found id: "ad7913dc9e1607d20e0cbd6d3d44ef340579bf65174cd3b27035835a8946e30b"
	I1126 19:46:17.172083   24166 cri.go:89] found id: "7d4984954b36bfbd050c3658d919f819dc9ce19aea48e5a0290dcfac22aeace0"
	I1126 19:46:17.172089   24166 cri.go:89] found id: "2b8bec3e2ec779fc901e3d90160f2e4e0448cc36e348eac02fcb532474dcbee1"
	I1126 19:46:17.172093   24166 cri.go:89] found id: "ede234b542a19f1aa9c02681ee1d5fbe6c57037a9260293c3b1852c2916f145c"
	I1126 19:46:17.172095   24166 cri.go:89] found id: "6a7655e3d11173fa378779756eb939bde8cea151854024de671595e3f9e4bed4"
	I1126 19:46:17.172097   24166 cri.go:89] found id: "38200cea3b3ff544ee0426a24f7a6f86fbabbe946f9559f929dfc5706fd07aa0"
	I1126 19:46:17.172099   24166 cri.go:89] found id: "8b2b7941061ee9d86acb83ddae50bb070eb55c44af1efd196ac7ba0877081596"
	I1126 19:46:17.172104   24166 cri.go:89] found id: "0535e084094ab8ac9bbac0816f134b170eca20a4436f3953354dcc8a22af70f9"
	I1126 19:46:17.172106   24166 cri.go:89] found id: "1be6b8c64569e2aadf1df96661c8171e9c7c6db016de01404c03fad7266eee83"
	I1126 19:46:17.172109   24166 cri.go:89] found id: "394f6ed4921251e149b31e86690a353a08cf217e47b6f6ff3259e36f5ec61a58"
	I1126 19:46:17.172111   24166 cri.go:89] found id: "cda4cdded5d608694cec946b88fc3759d6c57bf1faf6c133be51b0e264d5e6bd"
	I1126 19:46:17.172113   24166 cri.go:89] found id: "556cae71b6231b50df23dba8310b06caefdb84ad4d53c7d364ec59f5f0180c32"
	I1126 19:46:17.172115   24166 cri.go:89] found id: "d8f1f353d1e5e1d9b8ba926cf559ac6ea1922c1a60c6bb6c67d713abbd707896"
	I1126 19:46:17.172119   24166 cri.go:89] found id: "d56609cc27b8046d01aba9e8815a4b98d12521bd1f12436865b7e8c4582ed9f0"
	I1126 19:46:17.172121   24166 cri.go:89] found id: ""
	I1126 19:46:17.172168   24166 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 19:46:17.182477   24166 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:46:17Z" level=error msg="open /run/runc: no such file or directory"
	I1126 19:46:17.182546   24166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 19:46:17.189913   24166 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 19:46:17.189941   24166 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 19:46:17.189991   24166 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 19:46:17.197046   24166 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 19:46:17.197525   24166 kubeconfig.go:125] found "functional-793215" server: "https://192.168.49.2:8441"
	I1126 19:46:17.198766   24166 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 19:46:17.206342   24166 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-26 19:44:08.402976483 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-26 19:46:16.338557025 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
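The drift detection above is driven purely by `diff`'s exit status: 0 means the deployed `kubeadm.yaml` matches the newly generated one, non-zero means drift and triggers a reconfigure. A minimal sketch with two stand-in files reproducing the pattern (the values mirror the admission-plugins change shown in the diff):

```shell
# Two throwaway config fragments that differ in one value; a non-zero
# diff exit status is what the log treats as "config drift".
old=$(mktemp); new=$(mktemp)
printf 'enable-admission-plugins: "NamespaceLifecycle"\n' > "$old"
printf 'enable-admission-plugins: "NamespaceAutoProvision"\n' > "$new"
if ! diff -u "$old" "$new" > /dev/null; then
  echo "config drift detected, will reconfigure"
fi
```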
	I1126 19:46:17.206354   24166 kubeadm.go:1161] stopping kube-system containers ...
	I1126 19:46:17.206367   24166 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1126 19:46:17.206420   24166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 19:46:17.232975   24166 cri.go:89] found id: "2743d5b0729a9cf8b3c65f8424eef4b22a31460f7b6f4f2121fe0199144c6bff"
	I1126 19:46:17.232986   24166 cri.go:89] found id: "6e948b06b6718505894e442e1e49f2b691b7b1db860ea392889d2edf82f01de6"
	I1126 19:46:17.232989   24166 cri.go:89] found id: "ad7913dc9e1607d20e0cbd6d3d44ef340579bf65174cd3b27035835a8946e30b"
	I1126 19:46:17.232992   24166 cri.go:89] found id: "7d4984954b36bfbd050c3658d919f819dc9ce19aea48e5a0290dcfac22aeace0"
	I1126 19:46:17.232994   24166 cri.go:89] found id: "2b8bec3e2ec779fc901e3d90160f2e4e0448cc36e348eac02fcb532474dcbee1"
	I1126 19:46:17.232996   24166 cri.go:89] found id: "ede234b542a19f1aa9c02681ee1d5fbe6c57037a9260293c3b1852c2916f145c"
	I1126 19:46:17.232999   24166 cri.go:89] found id: "6a7655e3d11173fa378779756eb939bde8cea151854024de671595e3f9e4bed4"
	I1126 19:46:17.233001   24166 cri.go:89] found id: "38200cea3b3ff544ee0426a24f7a6f86fbabbe946f9559f929dfc5706fd07aa0"
	I1126 19:46:17.233003   24166 cri.go:89] found id: "8b2b7941061ee9d86acb83ddae50bb070eb55c44af1efd196ac7ba0877081596"
	I1126 19:46:17.233008   24166 cri.go:89] found id: "0535e084094ab8ac9bbac0816f134b170eca20a4436f3953354dcc8a22af70f9"
	I1126 19:46:17.233019   24166 cri.go:89] found id: "1be6b8c64569e2aadf1df96661c8171e9c7c6db016de01404c03fad7266eee83"
	I1126 19:46:17.233023   24166 cri.go:89] found id: "394f6ed4921251e149b31e86690a353a08cf217e47b6f6ff3259e36f5ec61a58"
	I1126 19:46:17.233025   24166 cri.go:89] found id: "cda4cdded5d608694cec946b88fc3759d6c57bf1faf6c133be51b0e264d5e6bd"
	I1126 19:46:17.233026   24166 cri.go:89] found id: "556cae71b6231b50df23dba8310b06caefdb84ad4d53c7d364ec59f5f0180c32"
	I1126 19:46:17.233028   24166 cri.go:89] found id: "d8f1f353d1e5e1d9b8ba926cf559ac6ea1922c1a60c6bb6c67d713abbd707896"
	I1126 19:46:17.233032   24166 cri.go:89] found id: "d56609cc27b8046d01aba9e8815a4b98d12521bd1f12436865b7e8c4582ed9f0"
	I1126 19:46:17.233034   24166 cri.go:89] found id: ""
	I1126 19:46:17.233039   24166 cri.go:252] Stopping containers: [2743d5b0729a9cf8b3c65f8424eef4b22a31460f7b6f4f2121fe0199144c6bff 6e948b06b6718505894e442e1e49f2b691b7b1db860ea392889d2edf82f01de6 ad7913dc9e1607d20e0cbd6d3d44ef340579bf65174cd3b27035835a8946e30b 7d4984954b36bfbd050c3658d919f819dc9ce19aea48e5a0290dcfac22aeace0 2b8bec3e2ec779fc901e3d90160f2e4e0448cc36e348eac02fcb532474dcbee1 ede234b542a19f1aa9c02681ee1d5fbe6c57037a9260293c3b1852c2916f145c 6a7655e3d11173fa378779756eb939bde8cea151854024de671595e3f9e4bed4 38200cea3b3ff544ee0426a24f7a6f86fbabbe946f9559f929dfc5706fd07aa0 8b2b7941061ee9d86acb83ddae50bb070eb55c44af1efd196ac7ba0877081596 0535e084094ab8ac9bbac0816f134b170eca20a4436f3953354dcc8a22af70f9 1be6b8c64569e2aadf1df96661c8171e9c7c6db016de01404c03fad7266eee83 394f6ed4921251e149b31e86690a353a08cf217e47b6f6ff3259e36f5ec61a58 cda4cdded5d608694cec946b88fc3759d6c57bf1faf6c133be51b0e264d5e6bd 556cae71b6231b50df23dba8310b06caefdb84ad4d53c7d364ec59f5f0180c32 d8f1f353d1e5e1d9b8ba926cf559ac6ea1922c1a60c6bb6c67d713abbd707896 d56609cc27b8046d01aba9e8815a4b98d12521bd1f12436865b7e8c4582ed9f0]
	I1126 19:46:17.233093   24166 ssh_runner.go:195] Run: which crictl
	I1126 19:46:17.236440   24166 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 2743d5b0729a9cf8b3c65f8424eef4b22a31460f7b6f4f2121fe0199144c6bff 6e948b06b6718505894e442e1e49f2b691b7b1db860ea392889d2edf82f01de6 ad7913dc9e1607d20e0cbd6d3d44ef340579bf65174cd3b27035835a8946e30b 7d4984954b36bfbd050c3658d919f819dc9ce19aea48e5a0290dcfac22aeace0 2b8bec3e2ec779fc901e3d90160f2e4e0448cc36e348eac02fcb532474dcbee1 ede234b542a19f1aa9c02681ee1d5fbe6c57037a9260293c3b1852c2916f145c 6a7655e3d11173fa378779756eb939bde8cea151854024de671595e3f9e4bed4 38200cea3b3ff544ee0426a24f7a6f86fbabbe946f9559f929dfc5706fd07aa0 8b2b7941061ee9d86acb83ddae50bb070eb55c44af1efd196ac7ba0877081596 0535e084094ab8ac9bbac0816f134b170eca20a4436f3953354dcc8a22af70f9 1be6b8c64569e2aadf1df96661c8171e9c7c6db016de01404c03fad7266eee83 394f6ed4921251e149b31e86690a353a08cf217e47b6f6ff3259e36f5ec61a58 cda4cdded5d608694cec946b88fc3759d6c57bf1faf6c133be51b0e264d5e6bd 556cae71b6231b50df23dba8310b06caefdb84ad4d53c7d364ec59f5f0180c32 d8f1f353d1e5e1d9b8ba926cf559ac6ea1922c1a60c6bb6c67d713abbd707896 d56609cc27b8046d01aba9e8815a4b98d12521bd1f12436865b7e8c4582ed9f0
	I1126 19:46:17.336890   24166 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1126 19:46:17.452874   24166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 19:46:17.460469   24166 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 26 19:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Nov 26 19:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 26 19:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Nov 26 19:44 /etc/kubernetes/scheduler.conf
	
	I1126 19:46:17.460532   24166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1126 19:46:17.467789   24166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1126 19:46:17.474904   24166 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1126 19:46:17.474963   24166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 19:46:17.482135   24166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1126 19:46:17.489478   24166 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1126 19:46:17.489529   24166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 19:46:17.497173   24166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1126 19:46:17.504464   24166 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1126 19:46:17.504514   24166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
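The three grep/rm pairs above follow one pattern: if a kubeconfig does not reference the expected control-plane endpoint, it is deleted so the subsequent `kubeadm init phase kubeconfig` regenerates it. A sketch with a stand-in file and a deliberately wrong server URL:

```shell
# Stale-kubeconfig check: grep for the expected endpoint; on a miss,
# remove the file (the same grep-then-rm sequence the log shows).
conf=$(mktemp)
echo 'server: https://127.0.0.1:6443' > "$conf"
want='https://control-plane.minikube.internal:8441'
if ! grep -q "$want" "$conf"; then
  rm -f "$conf"
  echo "removed stale kubeconfig"
fi
```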
	I1126 19:46:17.511745   24166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 19:46:17.519327   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1126 19:46:17.568589   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1126 19:46:21.212457   24166 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.643843949s)
	I1126 19:46:21.212513   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1126 19:46:21.426076   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1126 19:46:21.488587   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1126 19:46:21.555847   24166 api_server.go:52] waiting for apiserver process to appear ...
	I1126 19:46:21.555948   24166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:46:22.056086   24166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:46:22.556097   24166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:46:22.572987   24166 api_server.go:72] duration metric: took 1.017137966s to wait for apiserver process to appear ...
	I1126 19:46:22.573014   24166 api_server.go:88] waiting for apiserver healthz status ...
	I1126 19:46:22.573032   24166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1126 19:46:26.607865   24166 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1126 19:46:26.607881   24166 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1126 19:46:26.607893   24166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1126 19:46:26.641629   24166 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1126 19:46:26.641646   24166 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1126 19:46:27.073174   24166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1126 19:46:27.082258   24166 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 19:46:27.082276   24166 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
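The 500 bodies above are per-check reports: passing hooks are prefixed `[+]`, failing ones `[-]`. Filtering for the failing prefix is a quick way to see why healthz failed while bootstrap RBAC roles were still being created. A sketch over a pasted-in excerpt of the body:

```shell
# Extract failing checks from a healthz body excerpt; grep anchors on
# the literal "[-]" prefix, escaped for the regex.
body='[+]ping ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]etcd ok'
printf '%s\n' "$body" | grep '^\[-\]'
```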
	I1126 19:46:27.574013   24166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1126 19:46:27.597349   24166 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 19:46:27.597371   24166 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 19:46:28.073200   24166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1126 19:46:28.082615   24166 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1126 19:46:28.097065   24166 api_server.go:141] control plane version: v1.34.1
	I1126 19:46:28.097099   24166 api_server.go:131] duration metric: took 5.524078357s to wait for apiserver health ...
	I1126 19:46:28.097117   24166 cni.go:84] Creating CNI manager for ""
	I1126 19:46:28.097124   24166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:46:28.100629   24166 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1126 19:46:28.103670   24166 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 19:46:28.108553   24166 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 19:46:28.108564   24166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 19:46:28.123499   24166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 19:46:28.591282   24166 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 19:46:28.595458   24166 system_pods.go:59] 8 kube-system pods found
	I1126 19:46:28.595481   24166 system_pods.go:61] "coredns-66bc5c9577-g24xc" [5392adab-6855-49ac-bb16-2cbc2584266c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:46:28.595488   24166 system_pods.go:61] "etcd-functional-793215" [80ae3926-14ee-4c2e-9205-ef62a66f4751] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 19:46:28.595494   24166 system_pods.go:61] "kindnet-k9w7g" [f8e110c1-99f9-4c62-a2c3-d4fe97e941cb] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 19:46:28.595505   24166 system_pods.go:61] "kube-apiserver-functional-793215" [463826be-1410-49ab-8085-c8a32cd268f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 19:46:28.595510   24166 system_pods.go:61] "kube-controller-manager-functional-793215" [55532c74-b9e5-4f24-abc2-0787bee5986d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 19:46:28.595514   24166 system_pods.go:61] "kube-proxy-s89lz" [3c4da607-cab2-45b2-b247-8462c6f11ec8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 19:46:28.595518   24166 system_pods.go:61] "kube-scheduler-functional-793215" [70f58041-3fbe-4065-87ce-bb89198b5478] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 19:46:28.595524   24166 system_pods.go:61] "storage-provisioner" [cdfa27a0-3423-43ff-bdea-ded6c65fa201] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 19:46:28.595529   24166 system_pods.go:74] duration metric: took 4.237409ms to wait for pod list to return data ...
	I1126 19:46:28.595539   24166 node_conditions.go:102] verifying NodePressure condition ...
	I1126 19:46:28.598507   24166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 19:46:28.598527   24166 node_conditions.go:123] node cpu capacity is 2
	I1126 19:46:28.598537   24166 node_conditions.go:105] duration metric: took 2.995146ms to run NodePressure ...
	I1126 19:46:28.598593   24166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1126 19:46:28.887677   24166 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1126 19:46:28.892602   24166 kubeadm.go:744] kubelet initialised
	I1126 19:46:28.892612   24166 kubeadm.go:745] duration metric: took 4.922914ms waiting for restarted kubelet to initialise ...
	I1126 19:46:28.892626   24166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 19:46:28.903612   24166 ops.go:34] apiserver oom_adj: -16
	I1126 19:46:28.903624   24166 kubeadm.go:602] duration metric: took 11.713678048s to restartPrimaryControlPlane
	I1126 19:46:28.903632   24166 kubeadm.go:403] duration metric: took 11.758358878s to StartCluster
	I1126 19:46:28.903645   24166 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:46:28.903703   24166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 19:46:28.904300   24166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 19:46:28.904503   24166 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 19:46:28.904811   24166 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:46:28.904862   24166 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 19:46:28.904955   24166 addons.go:70] Setting storage-provisioner=true in profile "functional-793215"
	I1126 19:46:28.904971   24166 addons.go:239] Setting addon storage-provisioner=true in "functional-793215"
	W1126 19:46:28.904983   24166 addons.go:248] addon storage-provisioner should already be in state true
	I1126 19:46:28.904994   24166 addons.go:70] Setting default-storageclass=true in profile "functional-793215"
	I1126 19:46:28.905001   24166 host.go:66] Checking if "functional-793215" exists ...
	I1126 19:46:28.905008   24166 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-793215"
	I1126 19:46:28.905362   24166 cli_runner.go:164] Run: docker container inspect functional-793215 --format={{.State.Status}}
	I1126 19:46:28.905649   24166 cli_runner.go:164] Run: docker container inspect functional-793215 --format={{.State.Status}}
	I1126 19:46:28.907552   24166 out.go:179] * Verifying Kubernetes components...
	I1126 19:46:28.910679   24166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 19:46:28.954771   24166 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 19:46:28.956071   24166 addons.go:239] Setting addon default-storageclass=true in "functional-793215"
	W1126 19:46:28.956080   24166 addons.go:248] addon default-storageclass should already be in state true
	I1126 19:46:28.956101   24166 host.go:66] Checking if "functional-793215" exists ...
	I1126 19:46:28.956507   24166 cli_runner.go:164] Run: docker container inspect functional-793215 --format={{.State.Status}}
	I1126 19:46:28.961726   24166 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:46:28.961742   24166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 19:46:28.961817   24166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:46:29.003000   24166 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 19:46:29.003012   24166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 19:46:29.003068   24166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:46:29.020640   24166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
	I1126 19:46:29.040963   24166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
	I1126 19:46:29.140781   24166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 19:46:29.153285   24166 node_ready.go:35] waiting up to 6m0s for node "functional-793215" to be "Ready" ...
	I1126 19:46:29.156960   24166 node_ready.go:49] node "functional-793215" is "Ready"
	I1126 19:46:29.156975   24166 node_ready.go:38] duration metric: took 3.670682ms for node "functional-793215" to be "Ready" ...
	I1126 19:46:29.156986   24166 api_server.go:52] waiting for apiserver process to appear ...
	I1126 19:46:29.157046   24166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 19:46:29.179477   24166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 19:46:29.191367   24166 api_server.go:72] duration metric: took 286.838865ms to wait for apiserver process to appear ...
	I1126 19:46:29.191382   24166 api_server.go:88] waiting for apiserver healthz status ...
	I1126 19:46:29.191401   24166 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1126 19:46:29.200728   24166 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1126 19:46:29.201621   24166 api_server.go:141] control plane version: v1.34.1
	I1126 19:46:29.201636   24166 api_server.go:131] duration metric: took 10.247674ms to wait for apiserver health ...
	I1126 19:46:29.201644   24166 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 19:46:29.204694   24166 system_pods.go:59] 8 kube-system pods found
	I1126 19:46:29.204711   24166 system_pods.go:61] "coredns-66bc5c9577-g24xc" [5392adab-6855-49ac-bb16-2cbc2584266c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:46:29.204717   24166 system_pods.go:61] "etcd-functional-793215" [80ae3926-14ee-4c2e-9205-ef62a66f4751] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 19:46:29.204722   24166 system_pods.go:61] "kindnet-k9w7g" [f8e110c1-99f9-4c62-a2c3-d4fe97e941cb] Running
	I1126 19:46:29.204728   24166 system_pods.go:61] "kube-apiserver-functional-793215" [463826be-1410-49ab-8085-c8a32cd268f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 19:46:29.204733   24166 system_pods.go:61] "kube-controller-manager-functional-793215" [55532c74-b9e5-4f24-abc2-0787bee5986d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 19:46:29.204737   24166 system_pods.go:61] "kube-proxy-s89lz" [3c4da607-cab2-45b2-b247-8462c6f11ec8] Running
	I1126 19:46:29.204744   24166 system_pods.go:61] "kube-scheduler-functional-793215" [70f58041-3fbe-4065-87ce-bb89198b5478] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 19:46:29.204747   24166 system_pods.go:61] "storage-provisioner" [cdfa27a0-3423-43ff-bdea-ded6c65fa201] Running
	I1126 19:46:29.204752   24166 system_pods.go:74] duration metric: took 3.103881ms to wait for pod list to return data ...
	I1126 19:46:29.204758   24166 default_sa.go:34] waiting for default service account to be created ...
	I1126 19:46:29.207192   24166 default_sa.go:45] found service account: "default"
	I1126 19:46:29.207202   24166 default_sa.go:55] duration metric: took 2.440489ms for default service account to be created ...
	I1126 19:46:29.207210   24166 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 19:46:29.209706   24166 system_pods.go:86] 8 kube-system pods found
	I1126 19:46:29.209721   24166 system_pods.go:89] "coredns-66bc5c9577-g24xc" [5392adab-6855-49ac-bb16-2cbc2584266c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 19:46:29.209728   24166 system_pods.go:89] "etcd-functional-793215" [80ae3926-14ee-4c2e-9205-ef62a66f4751] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 19:46:29.209733   24166 system_pods.go:89] "kindnet-k9w7g" [f8e110c1-99f9-4c62-a2c3-d4fe97e941cb] Running
	I1126 19:46:29.209739   24166 system_pods.go:89] "kube-apiserver-functional-793215" [463826be-1410-49ab-8085-c8a32cd268f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 19:46:29.209744   24166 system_pods.go:89] "kube-controller-manager-functional-793215" [55532c74-b9e5-4f24-abc2-0787bee5986d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 19:46:29.209748   24166 system_pods.go:89] "kube-proxy-s89lz" [3c4da607-cab2-45b2-b247-8462c6f11ec8] Running
	I1126 19:46:29.209753   24166 system_pods.go:89] "kube-scheduler-functional-793215" [70f58041-3fbe-4065-87ce-bb89198b5478] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 19:46:29.209755   24166 system_pods.go:89] "storage-provisioner" [cdfa27a0-3423-43ff-bdea-ded6c65fa201] Running
	I1126 19:46:29.209761   24166 system_pods.go:126] duration metric: took 2.546704ms to wait for k8s-apps to be running ...
	I1126 19:46:29.209767   24166 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 19:46:29.209826   24166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 19:46:29.228574   24166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 19:46:29.965455   24166 system_svc.go:56] duration metric: took 755.679716ms WaitForService to wait for kubelet
	I1126 19:46:29.965471   24166 kubeadm.go:587] duration metric: took 1.060948226s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 19:46:29.965491   24166 node_conditions.go:102] verifying NodePressure condition ...
	I1126 19:46:29.968459   24166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 19:46:29.968479   24166 node_conditions.go:123] node cpu capacity is 2
	I1126 19:46:29.968490   24166 node_conditions.go:105] duration metric: took 2.994982ms to run NodePressure ...
	I1126 19:46:29.968502   24166 start.go:242] waiting for startup goroutines ...
	I1126 19:46:29.977530   24166 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1126 19:46:29.980572   24166 addons.go:530] duration metric: took 1.075704554s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 19:46:29.980614   24166 start.go:247] waiting for cluster config update ...
	I1126 19:46:29.980638   24166 start.go:256] writing updated cluster config ...
	I1126 19:46:29.980951   24166 ssh_runner.go:195] Run: rm -f paused
	I1126 19:46:29.986316   24166 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:46:30.009258   24166 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-g24xc" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 19:46:32.015155   24166 pod_ready.go:104] pod "coredns-66bc5c9577-g24xc" is not "Ready", error: <nil>
	W1126 19:46:34.515113   24166 pod_ready.go:104] pod "coredns-66bc5c9577-g24xc" is not "Ready", error: <nil>
	I1126 19:46:35.515111   24166 pod_ready.go:94] pod "coredns-66bc5c9577-g24xc" is "Ready"
	I1126 19:46:35.515127   24166 pod_ready.go:86] duration metric: took 5.505853309s for pod "coredns-66bc5c9577-g24xc" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:46:35.517736   24166 pod_ready.go:83] waiting for pod "etcd-functional-793215" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 19:46:37.523960   24166 pod_ready.go:104] pod "etcd-functional-793215" is not "Ready", error: <nil>
	I1126 19:46:39.023629   24166 pod_ready.go:94] pod "etcd-functional-793215" is "Ready"
	I1126 19:46:39.023643   24166 pod_ready.go:86] duration metric: took 3.505895604s for pod "etcd-functional-793215" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:46:39.026105   24166 pod_ready.go:83] waiting for pod "kube-apiserver-functional-793215" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:46:40.044131   24166 pod_ready.go:94] pod "kube-apiserver-functional-793215" is "Ready"
	I1126 19:46:40.044148   24166 pod_ready.go:86] duration metric: took 1.018028245s for pod "kube-apiserver-functional-793215" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:46:40.049909   24166 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-793215" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:46:40.061341   24166 pod_ready.go:94] pod "kube-controller-manager-functional-793215" is "Ready"
	I1126 19:46:40.061360   24166 pod_ready.go:86] duration metric: took 11.393111ms for pod "kube-controller-manager-functional-793215" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:46:40.086915   24166 pod_ready.go:83] waiting for pod "kube-proxy-s89lz" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:46:40.108949   24166 pod_ready.go:94] pod "kube-proxy-s89lz" is "Ready"
	I1126 19:46:40.108965   24166 pod_ready.go:86] duration metric: took 22.03465ms for pod "kube-proxy-s89lz" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:46:40.220787   24166 pod_ready.go:83] waiting for pod "kube-scheduler-functional-793215" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:46:40.621604   24166 pod_ready.go:94] pod "kube-scheduler-functional-793215" is "Ready"
	I1126 19:46:40.621618   24166 pod_ready.go:86] duration metric: took 400.818663ms for pod "kube-scheduler-functional-793215" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 19:46:40.621629   24166 pod_ready.go:40] duration metric: took 10.635291062s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 19:46:40.673499   24166 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 19:46:40.676698   24166 out.go:179] * Done! kubectl is now configured to use "functional-793215" cluster and "default" namespace by default
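The repeated HTTP 500s from `/healthz` earlier in this log are the verbose form of the kube-apiserver health endpoint: while poststart hooks such as `rbac/bootstrap-roles` are still completing, the endpoint lists each check with a `[+]`/`[-]` marker and fails overall. When triaging captured reports like this one, a small parser (a sketch, not part of minikube) can surface only the failing checks:

```python
import re

def parse_healthz_verbose(payload: str) -> dict:
    """Split a verbose kube-apiserver /healthz body into passing and
    failing checks. Lines look like:
        [+]etcd ok
        [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
    """
    ok, failed = [], []
    for line in payload.splitlines():
        m = re.match(r"\[([+-])\](\S+)", line.strip())
        if not m:
            continue  # skips the "healthz check failed" trailer and blanks
        (ok if m.group(1) == "+" else failed).append(m.group(2))
    return {"ok": ok, "failed": failed}

# Sample body abridged from the log above.
sample = """\
[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/bootstrap-controller ok
healthz check failed
"""
result = parse_healthz_verbose(sample)
print(result["failed"])  # ['poststarthook/rbac/bootstrap-roles']
```

Once every hook reports `[+]`, the endpoint returns 200 with a plain `ok` body, which is exactly the transition visible at 19:46:28 above.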
	
	
	==> CRI-O <==
	Nov 26 19:47:29 functional-793215 crio[3755]: time="2025-11-26T19:47:29.229196805Z" level=info msg="Created container 12cc5da02d01a314a9a63430aa7d38704717b0527f3083dfb0bfc9118047905d: default/sp-pod/myfrontend" id=13e4baf7-3185-439e-a103-8a04b07589dd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 19:47:29 functional-793215 crio[3755]: time="2025-11-26T19:47:29.23007916Z" level=info msg="Starting container: 12cc5da02d01a314a9a63430aa7d38704717b0527f3083dfb0bfc9118047905d" id=2175158c-6458-4c11-926a-065dfd36c370 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 19:47:29 functional-793215 crio[3755]: time="2025-11-26T19:47:29.231982799Z" level=info msg="Started container" PID=5483 containerID=12cc5da02d01a314a9a63430aa7d38704717b0527f3083dfb0bfc9118047905d description=default/sp-pod/myfrontend id=2175158c-6458-4c11-926a-065dfd36c370 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1c0590fd4b3543d6fced0670f155de7a0274eec4e99e5455b8e4a8c853b5fbd4
	Nov 26 19:47:35 functional-793215 crio[3755]: time="2025-11-26T19:47:35.575357277Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6553bce4-e5fc-4722-9ae1-5a3afabcacbc name=/runtime.v1.ImageService/PullImage
	Nov 26 19:47:35 functional-793215 crio[3755]: time="2025-11-26T19:47:35.665825527Z" level=info msg="Running pod sandbox: default/hello-node-75c85bcc94-d8245/POD" id=1e611301-a1ab-410e-9097-ed4581045fd5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 19:47:35 functional-793215 crio[3755]: time="2025-11-26T19:47:35.66590181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 19:47:35 functional-793215 crio[3755]: time="2025-11-26T19:47:35.671164356Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-d8245 Namespace:default ID:59bd1b48cd95be241cb7e023109ded5ed64d2f26e577fe599f63885b687a1948 UID:2e34f8ca-d785-4722-bfea-2a429ff804ee NetNS:/var/run/netns/9a63f1bb-eb9f-4a89-b57b-e56109abe0f5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d3e8}] Aliases:map[]}"
	Nov 26 19:47:35 functional-793215 crio[3755]: time="2025-11-26T19:47:35.67130432Z" level=info msg="Adding pod default_hello-node-75c85bcc94-d8245 to CNI network \"kindnet\" (type=ptp)"
	Nov 26 19:47:35 functional-793215 crio[3755]: time="2025-11-26T19:47:35.681492732Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-d8245 Namespace:default ID:59bd1b48cd95be241cb7e023109ded5ed64d2f26e577fe599f63885b687a1948 UID:2e34f8ca-d785-4722-bfea-2a429ff804ee NetNS:/var/run/netns/9a63f1bb-eb9f-4a89-b57b-e56109abe0f5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d3e8}] Aliases:map[]}"
	Nov 26 19:47:35 functional-793215 crio[3755]: time="2025-11-26T19:47:35.681652871Z" level=info msg="Checking pod default_hello-node-75c85bcc94-d8245 for CNI network kindnet (type=ptp)"
	Nov 26 19:47:35 functional-793215 crio[3755]: time="2025-11-26T19:47:35.684790963Z" level=info msg="Ran pod sandbox 59bd1b48cd95be241cb7e023109ded5ed64d2f26e577fe599f63885b687a1948 with infra container: default/hello-node-75c85bcc94-d8245/POD" id=1e611301-a1ab-410e-9097-ed4581045fd5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 19:47:35 functional-793215 crio[3755]: time="2025-11-26T19:47:35.688974388Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7ce3906c-6d22-4cb8-ae0f-772a1ea30846 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:47:48 functional-793215 crio[3755]: time="2025-11-26T19:47:48.573251927Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4f823071-8f83-4642-93a2-3de737b6220d name=/runtime.v1.ImageService/PullImage
	Nov 26 19:48:02 functional-793215 crio[3755]: time="2025-11-26T19:48:02.572740471Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f036654f-b59e-44a5-912c-9aca5a48b18b name=/runtime.v1.ImageService/PullImage
	Nov 26 19:48:15 functional-793215 crio[3755]: time="2025-11-26T19:48:15.573393653Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5ff1761b-6e72-4096-8051-6c8fcb2eb98a name=/runtime.v1.ImageService/PullImage
	Nov 26 19:48:21 functional-793215 crio[3755]: time="2025-11-26T19:48:21.742867386Z" level=info msg="Stopping pod sandbox: 8eaf1980cfe9f0e5ca7929b7b4d2ee756ae148cd2bcffe2ba230dc97739d66b5" id=52dfd9de-5d2f-45a4-bb71-4090225ab016 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 26 19:48:21 functional-793215 crio[3755]: time="2025-11-26T19:48:21.742920309Z" level=info msg="Stopped pod sandbox (already stopped): 8eaf1980cfe9f0e5ca7929b7b4d2ee756ae148cd2bcffe2ba230dc97739d66b5" id=52dfd9de-5d2f-45a4-bb71-4090225ab016 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 26 19:48:21 functional-793215 crio[3755]: time="2025-11-26T19:48:21.743749528Z" level=info msg="Removing pod sandbox: 8eaf1980cfe9f0e5ca7929b7b4d2ee756ae148cd2bcffe2ba230dc97739d66b5" id=b85cf692-f5d1-4f5e-9094-b0c59f3f00ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 26 19:48:21 functional-793215 crio[3755]: time="2025-11-26T19:48:21.747762543Z" level=info msg="Removed pod sandbox: 8eaf1980cfe9f0e5ca7929b7b4d2ee756ae148cd2bcffe2ba230dc97739d66b5" id=b85cf692-f5d1-4f5e-9094-b0c59f3f00ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 26 19:48:53 functional-793215 crio[3755]: time="2025-11-26T19:48:53.572670687Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3ae0b0ea-5d1d-4001-9d98-eaea1c803568 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:49:10 functional-793215 crio[3755]: time="2025-11-26T19:49:10.57370581Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ad53200b-c771-4980-8208-82b56752d435 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:50:19 functional-793215 crio[3755]: time="2025-11-26T19:50:19.574517886Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=370f35db-b825-41e6-94c1-daf42b82a196 name=/runtime.v1.ImageService/PullImage
	Nov 26 19:50:44 functional-793215 crio[3755]: time="2025-11-26T19:50:44.572736516Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ade78365-e6ef-43fd-91a5-f9c55826c9ad name=/runtime.v1.ImageService/PullImage
	Nov 26 19:53:07 functional-793215 crio[3755]: time="2025-11-26T19:53:07.57301025Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=297dcf43-6d34-49e4-b928-f579a0fc51ae name=/runtime.v1.ImageService/PullImage
	Nov 26 19:53:26 functional-793215 crio[3755]: time="2025-11-26T19:53:26.572740935Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2c2ebb47-8e30-491f-88ec-3748d41dc602 name=/runtime.v1.ImageService/PullImage
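The CRI-O excerpt above shows the kubelet re-requesting `kicbase/echo-server:latest` at widening intervals over roughly six minutes — the image-pull back-off that leaves the `hello-node` pod pending and the ServiceCmd tests timing out. The retry spacing can be read straight off the log timestamps; a sketch (timestamps copied from the log, seconds truncated; the non-monotone gaps are plausibly two pending pods' retry timers interleaving):

```python
from datetime import datetime

# Pull-attempt times for kicbase/echo-server:latest, copied from the
# CRI-O section above (all on the same day; duplicate 19:47:35 collapsed).
attempts = [
    "19:47:35", "19:47:48", "19:48:02", "19:48:15",
    "19:48:53", "19:49:10", "19:50:19", "19:50:44",
    "19:53:07", "19:53:26",
]

def gaps_seconds(times):
    """Seconds between consecutive pull attempts."""
    ts = [datetime.strptime(t, "%H:%M:%S") for t in times]
    return [int((b - a).total_seconds()) for a, b in zip(ts, ts[1:])]

print(gaps_seconds(attempts))  # [13, 14, 13, 38, 17, 69, 25, 143, 19]
```

The pulls never report success before the log ends, consistent with the 600s failures of `ServiceCmd/DeployApp` and `ServiceCmdConnect` in the summary.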
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	12cc5da02d01a       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   1c0590fd4b354       sp-pod                                      default
	43e15f5be5dfd       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   daffb8c113e77       nginx-svc                                   default
	ec527c367cf48       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   989ae9d578b5b       kindnet-k9w7g                               kube-system
	0a26e2e6cfdce       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   0dd5b20da6a7b       kube-proxy-s89lz                            kube-system
	13e72d118b59a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   3                   6972a1f4331aa       coredns-66bc5c9577-g24xc                    kube-system
	5c36a49a34b06       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   0c90ce8e57910       storage-provisioner                         kube-system
	3c27fc25a2663       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   7e4b1f4e535ad       kube-apiserver-functional-793215            kube-system
	b0f713f30fff1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   193749da5ba4e       kube-scheduler-functional-793215            kube-system
	e7cd3e4a59a07       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   f82dcd0e59ece       kube-controller-manager-functional-793215   kube-system
	5b9a9bfa030b8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   9ad551828018d       etcd-functional-793215                      kube-system
	2743d5b0729a9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   2                   6972a1f4331aa       coredns-66bc5c9577-g24xc                    kube-system
	6e948b06b6718       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   0c90ce8e57910       storage-provisioner                         kube-system
	ad7913dc9e160       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            2                   193749da5ba4e       kube-scheduler-functional-793215            kube-system
	2b8bec3e2ec77       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   2                   f82dcd0e59ece       kube-controller-manager-functional-793215   kube-system
	ede234b542a19       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      2                   9ad551828018d       etcd-functional-793215                      kube-system
	6a7655e3d1117       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                2                   0dd5b20da6a7b       kube-proxy-s89lz                            kube-system
	38200cea3b3ff       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               2                   989ae9d578b5b       kindnet-k9w7g                               kube-system
	
	
	==> coredns [13e72d118b59ab2e8e38a387504bfebcc68009e6dbd867543c8bb9971f0bf2f3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39918 - 8511 "HINFO IN 373300735593460123.1859706215629821212. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011978133s
	
	
	==> coredns [2743d5b0729a9cf8b3c65f8424eef4b22a31460f7b6f4f2121fe0199144c6bff] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44018 - 45967 "HINFO IN 3000310521443691245.7074603910804849283. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023071527s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-793215
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-793215
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=functional-793215
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T19_44_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:44:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-793215
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 19:57:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 19:56:59 +0000   Wed, 26 Nov 2025 19:44:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 19:56:59 +0000   Wed, 26 Nov 2025 19:44:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 19:56:59 +0000   Wed, 26 Nov 2025 19:44:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 19:56:59 +0000   Wed, 26 Nov 2025 19:45:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-793215
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                37e4cc1d-6742-45ca-a033-39013ad94b40
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-d8245                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  default                     hello-node-connect-7d85dfc575-ncgw7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 coredns-66bc5c9577-g24xc                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-793215                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-k9w7g                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-793215             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-793215    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-s89lz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-793215             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-793215 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-793215 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-793215 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-793215 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-793215 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-793215 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-793215 event: Registered Node functional-793215 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-793215 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-793215 event: Registered Node functional-793215 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-793215 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-793215 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-793215 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-793215 event: Registered Node functional-793215 in Controller
	
	
	==> dmesg <==
	[Nov26 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014220] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507172] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032749] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.773464] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.697672] kauditd_printk_skb: 36 callbacks suppressed
	[Nov26 19:37] overlayfs: idmapped layers are currently not supported
	[  +0.074077] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov26 19:39] hrtimer: interrupt took 16123050 ns
	[Nov26 19:43] overlayfs: idmapped layers are currently not supported
	[Nov26 19:44] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5b9a9bfa030b89cf467ec5c4ab56d65be7ce4f4ab6cdcf9a066811b8317be046] <==
	{"level":"warn","ts":"2025-11-26T19:46:25.132472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.156763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.173766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.201602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.222577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.237623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.293610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.297383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.317274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.330189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.350541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.368631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.384274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.413508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.431342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.449269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.467423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.484852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.534506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.582773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.618263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:46:25.678070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46196","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T19:56:24.304541Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1157}
	{"level":"info","ts":"2025-11-26T19:56:24.330896Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1157,"took":"25.867075ms","hash":1933221542,"current-db-size-bytes":3325952,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1515520,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-26T19:56:24.330964Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1933221542,"revision":1157,"compact-revision":-1}
	
	
	==> etcd [ede234b542a19f1aa9c02681ee1d5fbe6c57037a9260293c3b1852c2916f145c] <==
	{"level":"warn","ts":"2025-11-26T19:45:41.692686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:45:41.729541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:45:41.769675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:45:41.818056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:45:41.835490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:45:41.863658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T19:45:42.018986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54410","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T19:46:09.072814Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-26T19:46:09.072873Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-793215","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-26T19:46:09.072964Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-26T19:46:09.213430Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-26T19:46:09.214932Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T19:46:09.215023Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-26T19:46:09.215008Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-26T19:46:09.215101Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-26T19:46:09.215129Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-11-26T19:46:09.215138Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-26T19:46:09.215087Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-26T19:46:09.215165Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-26T19:46:09.215173Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T19:46:09.215116Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-26T19:46:09.218913Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-26T19:46:09.218991Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T19:46:09.219040Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-26T19:46:09.219053Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-793215","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 19:57:02 up 39 min,  0 user,  load average: 0.19, 0.32, 0.48
	Linux functional-793215 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [38200cea3b3ff544ee0426a24f7a6f86fbabbe946f9559f929dfc5706fd07aa0] <==
	I1126 19:45:38.634400       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 19:45:38.634594       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1126 19:45:38.634705       1 main.go:148] setting mtu 1500 for CNI 
	I1126 19:45:38.634716       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 19:45:38.634729       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T19:45:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 19:45:38.885881       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 19:45:38.885993       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 19:45:38.886027       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 19:45:38.890816       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 19:45:43.102305       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 19:45:43.102462       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 19:45:43.102580       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 19:45:43.102694       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1126 19:45:44.790388       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 19:45:44.790419       1 metrics.go:72] Registering metrics
	I1126 19:45:44.790490       1 controller.go:711] "Syncing nftables rules"
	I1126 19:45:48.885404       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:45:48.885479       1 main.go:301] handling current node
	I1126 19:45:58.886124       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:45:58.886158       1 main.go:301] handling current node
	I1126 19:46:08.888117       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:46:08.888155       1 main.go:301] handling current node
	
	
	==> kindnet [ec527c367cf4858406394f71cef4601a79a9d5e13518f669d7253cc834b92380] <==
	I1126 19:54:58.329563       1 main.go:301] handling current node
	I1126 19:55:08.330490       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:55:08.330598       1 main.go:301] handling current node
	I1126 19:55:18.334016       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:55:18.334052       1 main.go:301] handling current node
	I1126 19:55:28.331649       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:55:28.331749       1 main.go:301] handling current node
	I1126 19:55:38.333995       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:55:38.334029       1 main.go:301] handling current node
	I1126 19:55:48.335353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:55:48.335387       1 main.go:301] handling current node
	I1126 19:55:58.329998       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:55:58.330033       1 main.go:301] handling current node
	I1126 19:56:08.338002       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:56:08.338036       1 main.go:301] handling current node
	I1126 19:56:18.333217       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:56:18.333249       1 main.go:301] handling current node
	I1126 19:56:28.329980       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:56:28.330083       1 main.go:301] handling current node
	I1126 19:56:38.333224       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:56:38.333262       1 main.go:301] handling current node
	I1126 19:56:48.333490       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:56:48.333595       1 main.go:301] handling current node
	I1126 19:56:58.331304       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 19:56:58.331339       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3c27fc25a2663dfe14925325776002f6baf301cb7f4116d8d4727dee04e7b80d] <==
	I1126 19:46:26.732048       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 19:46:26.732055       1 cache.go:39] Caches are synced for autoregister controller
	I1126 19:46:26.746567       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 19:46:26.749409       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1126 19:46:26.753005       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 19:46:26.761473       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1126 19:46:26.761569       1 policy_source.go:240] refreshing policies
	I1126 19:46:26.765731       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 19:46:26.772293       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 19:46:27.427159       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 19:46:27.584504       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 19:46:28.583203       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 19:46:28.718032       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 19:46:28.862733       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 19:46:28.875366       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 19:46:30.130668       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 19:46:30.433545       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 19:46:30.480099       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 19:46:43.969129       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.138.153"}
	I1126 19:46:49.186126       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.37.16"}
	I1126 19:46:59.717375       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.187.49"}
	E1126 19:47:27.249318       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:52652: use of closed network connection
	E1126 19:47:35.234340       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45520: use of closed network connection
	I1126 19:47:35.457532       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.55.76"}
	I1126 19:56:26.676212       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [2b8bec3e2ec779fc901e3d90160f2e4e0448cc36e348eac02fcb532474dcbee1] <==
	I1126 19:45:46.251891       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 19:45:46.254106       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 19:45:46.254418       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:45:46.256559       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 19:45:46.258941       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 19:45:46.261192       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 19:45:46.264265       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 19:45:46.267218       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1126 19:45:46.269704       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 19:45:46.288021       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 19:45:46.288066       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 19:45:46.288143       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 19:45:46.288467       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 19:45:46.288576       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 19:45:46.288613       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 19:45:46.288753       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 19:45:46.288787       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1126 19:45:46.288875       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-793215"
	I1126 19:45:46.288915       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1126 19:45:46.292939       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 19:45:46.293096       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 19:45:46.294275       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:45:46.297617       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 19:45:46.300898       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 19:45:46.318136       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-controller-manager [e7cd3e4a59a076f02368c23a97618df78418e4fb1461462d057ab2566730bbb6] <==
	I1126 19:46:30.083717       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:46:30.083762       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 19:46:30.083873       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1126 19:46:30.083961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1126 19:46:30.084049       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1126 19:46:30.084101       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 19:46:30.084336       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 19:46:30.090555       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 19:46:30.095384       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 19:46:30.110734       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1126 19:46:30.110887       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 19:46:30.122461       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 19:46:30.123585       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 19:46:30.123651       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 19:46:30.123688       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 19:46:30.123725       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 19:46:30.123780       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 19:46:30.123838       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 19:46:30.125784       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 19:46:30.130435       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 19:46:30.132865       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 19:46:30.134110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 19:46:30.134171       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 19:46:30.134185       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 19:46:30.142024       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	
	
	==> kube-proxy [0a26e2e6cfdcee0e03c6ddc1d30d00bbabde8de804c49e289ca657774ae1a9e6] <==
	I1126 19:46:28.008602       1 server_linux.go:53] "Using iptables proxy"
	I1126 19:46:28.099854       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 19:46:28.200854       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 19:46:28.200896       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1126 19:46:28.200998       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 19:46:28.230721       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 19:46:28.232938       1 server_linux.go:132] "Using iptables Proxier"
	I1126 19:46:28.254184       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 19:46:28.254459       1 server.go:527] "Version info" version="v1.34.1"
	I1126 19:46:28.254481       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 19:46:28.257990       1 config.go:200] "Starting service config controller"
	I1126 19:46:28.258015       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 19:46:28.258465       1 config.go:106] "Starting endpoint slice config controller"
	I1126 19:46:28.258480       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 19:46:28.258500       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 19:46:28.258505       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 19:46:28.259208       1 config.go:309] "Starting node config controller"
	I1126 19:46:28.259229       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 19:46:28.259235       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 19:46:28.358478       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 19:46:28.358578       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 19:46:28.358591       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [6a7655e3d11173fa378779756eb939bde8cea151854024de671595e3f9e4bed4] <==
	I1126 19:45:38.621579       1 server_linux.go:53] "Using iptables proxy"
	I1126 19:45:40.071263       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1126 19:45:43.146337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-793215\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1126 19:45:44.038979       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 19:45:44.039018       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1126 19:45:44.039102       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 19:45:44.061365       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 19:45:44.061430       1 server_linux.go:132] "Using iptables Proxier"
	I1126 19:45:44.065602       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 19:45:44.066205       1 server.go:527] "Version info" version="v1.34.1"
	I1126 19:45:44.066271       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 19:45:44.068795       1 config.go:106] "Starting endpoint slice config controller"
	I1126 19:45:44.068822       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 19:45:44.069107       1 config.go:200] "Starting service config controller"
	I1126 19:45:44.069124       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 19:45:44.069459       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 19:45:44.069472       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 19:45:44.069913       1 config.go:309] "Starting node config controller"
	I1126 19:45:44.070065       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 19:45:44.070078       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 19:45:44.169392       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 19:45:44.169519       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 19:45:44.169401       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ad7913dc9e1607d20e0cbd6d3d44ef340579bf65174cd3b27035835a8946e30b] <==
	E1126 19:45:43.110648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 19:45:43.110841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 19:45:43.110926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 19:45:43.111003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 19:45:43.111153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 19:45:43.111252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 19:45:43.111362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 19:45:43.111463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 19:45:43.111599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 19:45:43.111725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 19:45:43.126459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1126 19:45:43.149738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 19:45:43.154206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 19:45:43.154573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 19:45:43.154676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 19:45:43.154783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 19:45:43.154881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 19:45:43.155236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1126 19:45:44.129593       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 19:46:09.067349       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1126 19:46:09.067367       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1126 19:46:09.067392       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1126 19:46:09.067434       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 19:46:09.067598       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1126 19:46:09.067614       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b0f713f30fff161ca0919bf11472e4174bd95b29111c4924f5f656f9d12628a0] <==
	I1126 19:46:24.768021       1 serving.go:386] Generated self-signed cert in-memory
	W1126 19:46:26.641942       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 19:46:26.642050       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 19:46:26.642086       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 19:46:26.642133       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 19:46:26.685791       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 19:46:26.685914       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 19:46:26.691837       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 19:46:26.691914       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 19:46:26.691931       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 19:46:26.691957       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 19:46:26.792051       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 19:54:19 functional-793215 kubelet[4075]: E1126 19:54:19.573234    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:54:27 functional-793215 kubelet[4075]: E1126 19:54:27.572758    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:54:34 functional-793215 kubelet[4075]: E1126 19:54:34.572562    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:54:38 functional-793215 kubelet[4075]: E1126 19:54:38.572673    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:54:47 functional-793215 kubelet[4075]: E1126 19:54:47.573475    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:54:52 functional-793215 kubelet[4075]: E1126 19:54:52.573044    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:55:02 functional-793215 kubelet[4075]: E1126 19:55:02.572712    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:55:03 functional-793215 kubelet[4075]: E1126 19:55:03.572478    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:55:17 functional-793215 kubelet[4075]: E1126 19:55:17.573471    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:55:17 functional-793215 kubelet[4075]: E1126 19:55:17.574482    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:55:29 functional-793215 kubelet[4075]: E1126 19:55:29.572909    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:55:32 functional-793215 kubelet[4075]: E1126 19:55:32.572779    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:55:40 functional-793215 kubelet[4075]: E1126 19:55:40.573120    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:55:43 functional-793215 kubelet[4075]: E1126 19:55:43.573328    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:55:55 functional-793215 kubelet[4075]: E1126 19:55:55.572889    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:55:56 functional-793215 kubelet[4075]: E1126 19:55:56.573080    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:56:09 functional-793215 kubelet[4075]: E1126 19:56:09.573150    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:56:10 functional-793215 kubelet[4075]: E1126 19:56:10.572536    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:56:23 functional-793215 kubelet[4075]: E1126 19:56:23.573227    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:56:24 functional-793215 kubelet[4075]: E1126 19:56:24.573058    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:56:36 functional-793215 kubelet[4075]: E1126 19:56:36.572944    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:56:37 functional-793215 kubelet[4075]: E1126 19:56:37.573116    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:56:50 functional-793215 kubelet[4075]: E1126 19:56:50.572730    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	Nov 26 19:56:50 functional-793215 kubelet[4075]: E1126 19:56:50.572731    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-d8245" podUID="2e34f8ca-d785-4722-bfea-2a429ff804ee"
	Nov 26 19:57:01 functional-793215 kubelet[4075]: E1126 19:57:01.573836    4075 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-ncgw7" podUID="f6519b52-f309-4387-b608-494ae623ee3f"
	
	
	==> storage-provisioner [5c36a49a34b06155dcdb1b8980fff54aaed462918a0bbd72216501ec3ed52f19] <==
	W1126 19:56:38.112178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:40.115493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:40.120709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:42.124956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:42.130648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:44.133424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:44.140003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:46.143414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:46.147660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:48.150473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:48.154807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:50.157411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:50.164362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:52.168165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:52.172708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:54.175329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:54.179696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:56.183106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:56.189528       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:58.192713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:56:58.197316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:57:00.217901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:57:00.248280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:57:02.252571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:57:02.260193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6e948b06b6718505894e442e1e49f2b691b7b1db860ea392889d2edf82f01de6] <==
	I1126 19:45:41.391316       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 19:45:43.175545       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 19:45:43.175680       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 19:45:43.185980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:45:46.644905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:45:50.905279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:45:54.509715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:45:57.564144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:46:00.589974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:46:00.597357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 19:46:00.597519       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 19:46:00.600393       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"841fe031-02d9-4bcc-9659-ab65e78f8799", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-793215_28a962ac-cdf0-450f-ae5e-479eb6933d23 became leader
	I1126 19:46:00.600595       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-793215_28a962ac-cdf0-450f-ae5e-479eb6933d23!
	W1126 19:46:00.608705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:46:00.614806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 19:46:00.701764       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-793215_28a962ac-cdf0-450f-ae5e-479eb6933d23!
	W1126 19:46:02.618518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:46:02.625626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:46:04.628569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:46:04.635704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:46:06.639825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:46:06.644639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:46:08.649569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 19:46:08.657048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-793215 -n functional-793215
helpers_test.go:269: (dbg) Run:  kubectl --context functional-793215 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-d8245 hello-node-connect-7d85dfc575-ncgw7
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-793215 describe pod hello-node-75c85bcc94-d8245 hello-node-connect-7d85dfc575-ncgw7
helpers_test.go:290: (dbg) kubectl --context functional-793215 describe pod hello-node-75c85bcc94-d8245 hello-node-connect-7d85dfc575-ncgw7:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-d8245
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-793215/192.168.49.2
	Start Time:       Wed, 26 Nov 2025 19:47:35 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hqm6z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hqm6z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m28s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-d8245 to functional-793215
	  Normal   Pulling    6m19s (x5 over 9m28s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m19s (x5 over 9m28s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m19s (x5 over 9m28s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m25s (x20 over 9m28s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x21 over 9m28s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-ncgw7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-793215/192.168.49.2
	Start Time:       Wed, 26 Nov 2025 19:46:59 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qj6j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4qj6j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-ncgw7 to functional-793215
	  Normal   Pulling    6m44s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m44s (x5 over 9m42s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m44s (x5 over 9m42s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m34s (x21 over 9m42s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x41 over 9m42s)     kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.92s)
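Note: every image-pull failure in this section is the same error — `short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list` — which comes from containers/image short-name resolution on the CRI-O node, configured via containers-registries.conf(5). A sketch of the relevant knobs (values are illustrative, not taken from this run; the test could equally be fixed by using the fully qualified name `docker.io/kicbase/echo-server`):

```toml
# /etc/containers/registries.conf (sketch)
# "enforcing" rejects ambiguous unqualified names — the mode this node is in.
# "permissive" falls back to the unqualified-search-registries list instead.
short-name-mode = "permissive"
unqualified-search-registries = ["docker.io"]

[aliases]
# Alternatively, pin an alias so "kicbase/echo-server" resolves unambiguously
# even in enforcing mode (typically placed in /etc/containers/registries.conf.d/).
"kicbase/echo-server" = "docker.io/kicbase/echo-server"
```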

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-793215 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-793215 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-d8245" [2e34f8ca-d785-4722-bfea-2a429ff804ee] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1126 19:49:28.112010    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:49:55.813592    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:54:28.112425    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-793215 -n functional-793215
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-26 19:57:35.901225598 +0000 UTC m=+1327.756839670
functional_test.go:1460: (dbg) Run:  kubectl --context functional-793215 describe po hello-node-75c85bcc94-d8245 -n default
functional_test.go:1460: (dbg) kubectl --context functional-793215 describe po hello-node-75c85bcc94-d8245 -n default:
Name:             hello-node-75c85bcc94-d8245
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-793215/192.168.49.2
Start Time:       Wed, 26 Nov 2025 19:47:35 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hqm6z (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-hqm6z:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-d8245 to functional-793215
Normal   Pulling    6m52s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m52s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m52s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-793215 logs hello-node-75c85bcc94-d8245 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-793215 logs hello-node-75c85bcc94-d8245 -n default: exit status 1 (143.462575ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-d8245" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-793215 logs hello-node-75c85bcc94-d8245 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image load --daemon kicbase/echo-server:functional-793215 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-793215" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image load --daemon kicbase/echo-server:functional-793215 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-793215" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-793215
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image load --daemon kicbase/echo-server:functional-793215 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-793215" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image save kicbase/echo-server:functional-793215 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1126 19:57:31.460553   30721 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:57:31.460749   30721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:57:31.460781   30721 out.go:374] Setting ErrFile to fd 2...
	I1126 19:57:31.460802   30721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:57:31.461064   30721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:57:31.461689   30721 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:57:31.461853   30721 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:57:31.462426   30721 cli_runner.go:164] Run: docker container inspect functional-793215 --format={{.State.Status}}
	I1126 19:57:31.480829   30721 ssh_runner.go:195] Run: systemctl --version
	I1126 19:57:31.480913   30721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
	I1126 19:57:31.501374   30721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
	I1126 19:57:31.616565   30721 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1126 19:57:31.616627   30721 cache_images.go:255] Failed to load cached images for "functional-793215": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1126 19:57:31.616653   30721 cache_images.go:267] failed pushing to: functional-793215

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-793215
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image save --daemon kicbase/echo-server:functional-793215 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-793215
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-793215: exit status 1 (18.318722ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-793215

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-793215

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 service --namespace=default --https --url hello-node: exit status 115 (387.669674ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32659
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-793215 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 service hello-node --url --format={{.IP}}: exit status 115 (386.170711ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-793215 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 service hello-node --url: exit status 115 (442.983188ms)

-- stdout --
	http://192.168.49.2:32659
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-793215 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32659
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.44s)

TestMultiControlPlane/serial/RestartCluster (478.71s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1126 20:06:48.593179    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:07:16.302263    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:09:28.112992    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:11:48.593224    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-278127 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 105 (7m52.590663565s)

-- stdout --
	* [ha-278127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-278127" primary control-plane node in "ha-278127" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	* Enabled addons: 
	
	* Starting "ha-278127-m02" control-plane node in "ha-278127" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

-- /stdout --
** stderr ** 
	I1126 20:06:24.854734   59960 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:06:24.854900   59960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.854911   59960 out.go:374] Setting ErrFile to fd 2...
	I1126 20:06:24.854917   59960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.855178   59960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:06:24.855529   59960 out.go:368] Setting JSON to false
	I1126 20:06:24.856339   59960 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2915,"bootTime":1764184670,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:06:24.856415   59960 start.go:143] virtualization:  
	I1126 20:06:24.859567   59960 out.go:179] * [ha-278127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:06:24.863328   59960 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:06:24.863432   59960 notify.go:221] Checking for updates...
	I1126 20:06:24.869239   59960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:06:24.872146   59960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:24.874915   59960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:06:24.877742   59960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:06:24.880612   59960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:06:24.883943   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:24.884479   59960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:06:24.917824   59960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:06:24.917967   59960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:06:24.982581   59960 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-26 20:06:24.973603153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:06:24.982686   59960 docker.go:319] overlay module found
	I1126 20:06:24.986072   59960 out.go:179] * Using the docker driver based on existing profile
	I1126 20:06:24.989065   59960 start.go:309] selected driver: docker
	I1126 20:06:24.989102   59960 start.go:927] validating driver "docker" against &{Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:24.989232   59960 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:06:24.989341   59960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:06:25.048426   59960 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-26 20:06:25.038525674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:06:25.048890   59960 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:06:25.048924   59960 cni.go:84] Creating CNI manager for ""
	I1126 20:06:25.048991   59960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1126 20:06:25.049039   59960 start.go:353] cluster config:
	{Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:25.052236   59960 out.go:179] * Starting "ha-278127" primary control-plane node in "ha-278127" cluster
	I1126 20:06:25.055057   59960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:06:25.058039   59960 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:06:25.061008   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:25.061089   59960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:06:25.061106   59960 cache.go:65] Caching tarball of preloaded images
	I1126 20:06:25.061005   59960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:06:25.061198   59960 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:06:25.061210   59960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:06:25.061353   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:25.080808   59960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:06:25.080831   59960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:06:25.080846   59960 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:06:25.080876   59960 start.go:360] acquireMachinesLock for ha-278127: {Name:mkb106a4eb425a1b9d0e59976741b3f940666d17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:06:25.080933   59960 start.go:364] duration metric: took 35.659µs to acquireMachinesLock for "ha-278127"
	I1126 20:06:25.080951   59960 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:06:25.080956   59960 fix.go:54] fixHost starting: 
	I1126 20:06:25.081217   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:06:25.097737   59960 fix.go:112] recreateIfNeeded on ha-278127: state=Stopped err=<nil>
	W1126 20:06:25.097772   59960 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:06:25.101061   59960 out.go:252] * Restarting existing docker container for "ha-278127" ...
	I1126 20:06:25.101155   59960 cli_runner.go:164] Run: docker start ha-278127
	I1126 20:06:25.385420   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:06:25.411970   59960 kic.go:430] container "ha-278127" state is running.
	I1126 20:06:25.412392   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:25.431941   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:25.432192   59960 machine.go:94] provisionDockerMachine start ...
	I1126 20:06:25.432251   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:25.452939   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:25.453252   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:25.453261   59960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:06:25.454097   59960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44664->127.0.0.1:32828: read: connection reset by peer
	I1126 20:06:28.605461   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127
	
	I1126 20:06:28.605490   59960 ubuntu.go:182] provisioning hostname "ha-278127"
	I1126 20:06:28.605558   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:28.623455   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:28.623769   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:28.623786   59960 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-278127 && echo "ha-278127" | sudo tee /etc/hostname
	I1126 20:06:28.778155   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127
	
	I1126 20:06:28.778256   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:28.794949   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:28.795250   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:28.795271   59960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-278127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-278127/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-278127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:06:28.942212   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:06:28.942238   59960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:06:28.942272   59960 ubuntu.go:190] setting up certificates
	I1126 20:06:28.942281   59960 provision.go:84] configureAuth start
	I1126 20:06:28.942355   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:28.960559   59960 provision.go:143] copyHostCerts
	I1126 20:06:28.960617   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:28.960653   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:06:28.960666   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:28.960744   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:06:28.960844   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:28.960866   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:06:28.960877   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:28.960906   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:06:28.960964   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:28.960985   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:06:28.960993   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:28.961023   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:06:28.961088   59960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.ha-278127 san=[127.0.0.1 192.168.49.2 ha-278127 localhost minikube]
	I1126 20:06:29.153972   59960 provision.go:177] copyRemoteCerts
	I1126 20:06:29.154049   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:06:29.154092   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.171236   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.273352   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1126 20:06:29.273420   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:06:29.290237   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1126 20:06:29.290299   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1126 20:06:29.307794   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1126 20:06:29.307855   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:06:29.325356   59960 provision.go:87] duration metric: took 383.045342ms to configureAuth
	I1126 20:06:29.325387   59960 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:06:29.325626   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:29.325742   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.342790   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:29.343103   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:29.343131   59960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:06:29.721722   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:06:29.721744   59960 machine.go:97] duration metric: took 4.28954331s to provisionDockerMachine
	I1126 20:06:29.721770   59960 start.go:293] postStartSetup for "ha-278127" (driver="docker")
	I1126 20:06:29.721791   59960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:06:29.721855   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:06:29.721907   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.742288   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.845365   59960 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:06:29.848307   59960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:06:29.848344   59960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:06:29.848355   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:06:29.848405   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:06:29.848509   59960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:06:29.848521   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /etc/ssl/certs/41292.pem
	I1126 20:06:29.848614   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:06:29.855777   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:29.872505   59960 start.go:296] duration metric: took 150.71913ms for postStartSetup
	I1126 20:06:29.872582   59960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:06:29.872629   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.889019   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.990934   59960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:06:29.995268   59960 fix.go:56] duration metric: took 4.914304894s for fixHost
	I1126 20:06:29.995338   59960 start.go:83] releasing machines lock for "ha-278127", held for 4.914396494s
	I1126 20:06:29.995443   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:30.012377   59960 ssh_runner.go:195] Run: cat /version.json
	I1126 20:06:30.012396   59960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:06:30.012433   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:30.012448   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:30.031079   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:30.032530   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:30.145909   59960 ssh_runner.go:195] Run: systemctl --version
	I1126 20:06:30.239511   59960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:06:30.276317   59960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:06:30.280821   59960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:06:30.280919   59960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:06:30.288826   59960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:06:30.288852   59960 start.go:496] detecting cgroup driver to use...
	I1126 20:06:30.288908   59960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:06:30.288973   59960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:06:30.304277   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:06:30.316900   59960 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:06:30.316968   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:06:30.332722   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:06:30.345857   59960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:06:30.458910   59960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:06:30.568914   59960 docker.go:234] disabling docker service ...
	I1126 20:06:30.568992   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:06:30.584111   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:06:30.596826   59960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:06:30.712581   59960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:06:30.831709   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:06:30.843921   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:06:30.857895   59960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:06:30.858007   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.867693   59960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:06:30.867809   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.876639   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.885174   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.893801   59960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:06:30.901606   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.910405   59960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.918408   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.927292   59960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:06:30.934726   59960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:06:30.941996   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:31.058637   59960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:06:31.242820   59960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:06:31.242889   59960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:06:31.246945   59960 start.go:564] Will wait 60s for crictl version
	I1126 20:06:31.247023   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:06:31.250523   59960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:06:31.274233   59960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:06:31.274317   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:06:31.302783   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:06:31.335292   59960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:06:31.338152   59960 cli_runner.go:164] Run: docker network inspect ha-278127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:06:31.354467   59960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 20:06:31.358251   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:06:31.368693   59960 kubeadm.go:884] updating cluster {Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:06:31.368839   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:31.368891   59960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:06:31.403727   59960 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:06:31.403752   59960 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:06:31.404010   59960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:06:31.431423   59960 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:06:31.431446   59960 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:06:31.431457   59960 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1126 20:06:31.431560   59960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-278127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:06:31.431642   59960 ssh_runner.go:195] Run: crio config
	I1126 20:06:31.500147   59960 cni.go:84] Creating CNI manager for ""
	I1126 20:06:31.500186   59960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1126 20:06:31.500211   59960 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:06:31.500236   59960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-278127 NodeName:ha-278127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:06:31.500354   59960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-278127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:06:31.500372   59960 kube-vip.go:115] generating kube-vip config ...
	I1126 20:06:31.500428   59960 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1126 20:06:31.512046   59960 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:06:31.512210   59960 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1126 20:06:31.512299   59960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:06:31.519877   59960 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:06:31.519973   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1126 20:06:31.527497   59960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1126 20:06:31.540828   59960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:06:31.553623   59960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1126 20:06:31.566105   59960 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1126 20:06:31.578838   59960 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1126 20:06:31.582461   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:06:31.592186   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:31.707439   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:06:31.722268   59960 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127 for IP: 192.168.49.2
	I1126 20:06:31.722291   59960 certs.go:195] generating shared ca certs ...
	I1126 20:06:31.722307   59960 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:31.722445   59960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:06:31.722497   59960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:06:31.722508   59960 certs.go:257] generating profile certs ...
	I1126 20:06:31.722593   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key
	I1126 20:06:31.722624   59960 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab
	I1126 20:06:31.722643   59960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1126 20:06:32.010576   59960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab ...
	I1126 20:06:32.010610   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab: {Name:mk952cf244227c47330a0f303648b46942398499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.010819   59960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab ...
	I1126 20:06:32.010835   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab: {Name:mk44577b028f8c1bee471863ff089cc458df619d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.010930   59960 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt
	I1126 20:06:32.011078   59960 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key
	I1126 20:06:32.011225   59960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key
	I1126 20:06:32.011244   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1126 20:06:32.011263   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1126 20:06:32.011280   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1126 20:06:32.011297   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1126 20:06:32.011315   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1126 20:06:32.011331   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1126 20:06:32.011348   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1126 20:06:32.011362   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1126 20:06:32.011414   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:06:32.011456   59960 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:06:32.011469   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:06:32.011501   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:06:32.011530   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:06:32.011558   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:06:32.011608   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:32.011640   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.011656   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.011666   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem -> /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.012331   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:06:32.032881   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:06:32.054562   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:06:32.072828   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:06:32.091195   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:06:32.109160   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:06:32.126721   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:06:32.143729   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:06:32.162210   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:06:32.179022   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:06:32.196402   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:06:32.213770   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:06:32.227414   59960 ssh_runner.go:195] Run: openssl version
	I1126 20:06:32.233654   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:06:32.243718   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.247376   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.247448   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.289532   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:06:32.297668   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:06:32.306080   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.309793   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.309880   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.353652   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:06:32.364544   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:06:32.373430   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.381651   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.381803   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.434961   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:06:32.448704   59960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:06:32.454552   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:06:32.518905   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:06:32.599420   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:06:32.673604   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:06:32.734602   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:06:32.794948   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:06:32.842245   59960 kubeadm.go:401] StartCluster: {Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:32.842417   59960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:06:32.842512   59960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:06:32.887488   59960 cri.go:89] found id: "f5647f1652cc11a195a49a98906391e791c3136916a5e3c249907585088fad42"
	I1126 20:06:32.887548   59960 cri.go:89] found id: "1ed2c42e7047cc402ab04fdadafa16acc5208b12eede0475826c97d34c9a071f"
	I1126 20:06:32.887577   59960 cri.go:89] found id: "040a8549001808f2d3fce3d4cf9f8dff272706173960c5e8004af8b1ea042e80"
	I1126 20:06:32.887595   59960 cri.go:89] found id: "106da3c0ad4fa03ae491f571375cda1a123fe52e6f7ef39170a84c273267c713"
	I1126 20:06:32.887614   59960 cri.go:89] found id: "cdc1651fea8f10bd665928dcc7bb174b74385eb06e911da9629df17c0d9d29e8"
	I1126 20:06:32.887650   59960 cri.go:89] found id: ""
	I1126 20:06:32.887728   59960 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:06:32.910884   59960 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:06:32Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:06:32.911021   59960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:06:32.933474   59960 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:06:32.933554   59960 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:06:32.933631   59960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:06:32.956246   59960 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:06:32.956760   59960 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-278127" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:32.956919   59960 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-2326/kubeconfig needs updating (will repair): [kubeconfig missing "ha-278127" cluster setting kubeconfig missing "ha-278127" context setting]
	I1126 20:06:32.957299   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.957946   59960 kapi.go:59] client config for ha-278127: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:06:32.958772   59960 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1126 20:06:32.958857   59960 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1126 20:06:32.958878   59960 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1126 20:06:32.958921   59960 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1126 20:06:32.958940   59960 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1126 20:06:32.958837   59960 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1126 20:06:32.959354   59960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:06:32.974056   59960 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1126 20:06:32.974125   59960 kubeadm.go:602] duration metric: took 40.551528ms to restartPrimaryControlPlane
	I1126 20:06:32.974150   59960 kubeadm.go:403] duration metric: took 131.91251ms to StartCluster
	I1126 20:06:32.974180   59960 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.974282   59960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:32.974978   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.975243   59960 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:06:32.975297   59960 start.go:242] waiting for startup goroutines ...
	I1126 20:06:32.975325   59960 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:06:32.975918   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:32.981231   59960 out.go:179] * Enabled addons: 
	I1126 20:06:32.984100   59960 addons.go:530] duration metric: took 8.777007ms for enable addons: enabled=[]
	I1126 20:06:32.984180   59960 start.go:247] waiting for cluster config update ...
	I1126 20:06:32.984203   59960 start.go:256] writing updated cluster config ...
	I1126 20:06:32.987492   59960 out.go:203] 
	I1126 20:06:32.990613   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:32.990800   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:32.994017   59960 out.go:179] * Starting "ha-278127-m02" control-plane node in "ha-278127" cluster
	I1126 20:06:32.996802   59960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:06:32.999792   59960 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:06:33.002700   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:33.002740   59960 cache.go:65] Caching tarball of preloaded images
	I1126 20:06:33.002860   59960 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:06:33.002893   59960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:06:33.003031   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:33.003254   59960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:06:33.039303   59960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:06:33.039323   59960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:06:33.039336   59960 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:06:33.039360   59960 start.go:360] acquireMachinesLock for ha-278127-m02: {Name:mkfa715e07e067116cf6c4854164186af5a39436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:06:33.039417   59960 start.go:364] duration metric: took 41.518µs to acquireMachinesLock for "ha-278127-m02"
	I1126 20:06:33.039439   59960 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:06:33.039445   59960 fix.go:54] fixHost starting: m02
	I1126 20:06:33.039721   59960 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:06:33.071417   59960 fix.go:112] recreateIfNeeded on ha-278127-m02: state=Stopped err=<nil>
	W1126 20:06:33.071449   59960 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:06:33.074580   59960 out.go:252] * Restarting existing docker container for "ha-278127-m02" ...
	I1126 20:06:33.074664   59960 cli_runner.go:164] Run: docker start ha-278127-m02
	I1126 20:06:33.452368   59960 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:06:33.483474   59960 kic.go:430] container "ha-278127-m02" state is running.
	I1126 20:06:33.483869   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:33.512602   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:33.512851   59960 machine.go:94] provisionDockerMachine start ...
	I1126 20:06:33.512917   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:33.539611   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:33.539907   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:33.539915   59960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:06:33.540557   59960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35216->127.0.0.1:32833: read: connection reset by peer
	I1126 20:06:36.755151   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127-m02
	
	I1126 20:06:36.755173   59960 ubuntu.go:182] provisioning hostname "ha-278127-m02"
	I1126 20:06:36.755238   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:36.783610   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:36.783923   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:36.783950   59960 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-278127-m02 && echo "ha-278127-m02" | sudo tee /etc/hostname
	I1126 20:06:37.026368   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127-m02
	
	I1126 20:06:37.026488   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:37.056257   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:37.056574   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:37.056592   59960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-278127-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-278127-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-278127-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:06:37.278605   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:06:37.278692   59960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:06:37.278724   59960 ubuntu.go:190] setting up certificates
	I1126 20:06:37.278764   59960 provision.go:84] configureAuth start
	I1126 20:06:37.278849   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:37.306165   59960 provision.go:143] copyHostCerts
	I1126 20:06:37.306207   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:37.306246   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:06:37.306253   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:37.306332   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:06:37.306421   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:37.306441   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:06:37.306445   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:37.306474   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:06:37.306512   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:37.306528   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:06:37.306532   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:37.306553   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:06:37.306602   59960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.ha-278127-m02 san=[127.0.0.1 192.168.49.3 ha-278127-m02 localhost minikube]
	I1126 20:06:37.781886   59960 provision.go:177] copyRemoteCerts
	I1126 20:06:37.782050   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:06:37.782113   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:37.799978   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:37.920744   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1126 20:06:37.920800   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:06:37.946353   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1126 20:06:37.946424   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 20:06:37.990628   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1126 20:06:37.990734   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:06:38.022932   59960 provision.go:87] duration metric: took 744.14174ms to configureAuth
	I1126 20:06:38.022999   59960 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:06:38.023281   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:38.023419   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:38.055902   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:38.056219   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:38.056232   59960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:06:39.163004   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:06:39.163066   59960 machine.go:97] duration metric: took 5.650194842s to provisionDockerMachine
	I1126 20:06:39.163087   59960 start.go:293] postStartSetup for "ha-278127-m02" (driver="docker")
	I1126 20:06:39.163098   59960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:06:39.163204   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:06:39.163258   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.194111   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.327619   59960 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:06:39.331483   59960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:06:39.331507   59960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:06:39.331518   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:06:39.331574   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:06:39.331649   59960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:06:39.331655   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /etc/ssl/certs/41292.pem
	I1126 20:06:39.331756   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:06:39.344886   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:39.377797   59960 start.go:296] duration metric: took 214.695598ms for postStartSetup
	I1126 20:06:39.377880   59960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:06:39.377991   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.402878   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.525023   59960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:06:39.531527   59960 fix.go:56] duration metric: took 6.492076268s for fixHost
	I1126 20:06:39.531551   59960 start.go:83] releasing machines lock for "ha-278127-m02", held for 6.492125467s
	I1126 20:06:39.531622   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:39.571062   59960 out.go:179] * Found network options:
	I1126 20:06:39.574101   59960 out.go:179]   - NO_PROXY=192.168.49.2
	W1126 20:06:39.577135   59960 proxy.go:120] fail to check proxy env: Error ip not in block
	W1126 20:06:39.577189   59960 proxy.go:120] fail to check proxy env: Error ip not in block
	I1126 20:06:39.577283   59960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:06:39.577298   59960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:06:39.577325   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.577353   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.610149   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.618182   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.847910   59960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:06:39.986067   59960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:06:39.986218   59960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:06:40.010567   59960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:06:40.010651   59960 start.go:496] detecting cgroup driver to use...
	I1126 20:06:40.010701   59960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:06:40.010777   59960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:06:40.066499   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:06:40.113187   59960 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:06:40.113357   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:06:40.138505   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:06:40.165558   59960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:06:40.434812   59960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:06:40.667360   59960 docker.go:234] disabling docker service ...
	I1126 20:06:40.667485   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:06:40.689020   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:06:40.712251   59960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:06:41.062262   59960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:06:41.446879   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:06:41.479018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:06:41.522736   59960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:06:41.522836   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.550554   59960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:06:41.550640   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.568877   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.605965   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.634535   59960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:06:41.647439   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.679616   59960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.700895   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.724575   59960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:06:41.743621   59960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:06:41.761053   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:42.179518   59960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:08:12.654700   59960 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.475140858s)
	I1126 20:08:12.654725   59960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:08:12.654777   59960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:08:12.658561   59960 start.go:564] Will wait 60s for crictl version
	I1126 20:08:12.658629   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:08:12.662122   59960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:08:12.694230   59960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:08:12.694320   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:08:12.723516   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:08:12.752895   59960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:08:12.755800   59960 out.go:179]   - env NO_PROXY=192.168.49.2
	I1126 20:08:12.758681   59960 cli_runner.go:164] Run: docker network inspect ha-278127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:08:12.774831   59960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 20:08:12.778729   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:08:12.788193   59960 mustload.go:66] Loading cluster: ha-278127
	I1126 20:08:12.788437   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:08:12.788732   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:08:12.805367   59960 host.go:66] Checking if "ha-278127" exists ...
	I1126 20:08:12.805673   59960 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127 for IP: 192.168.49.3
	I1126 20:08:12.805688   59960 certs.go:195] generating shared ca certs ...
	I1126 20:08:12.805703   59960 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:08:12.805829   59960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:08:12.805875   59960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:08:12.805885   59960 certs.go:257] generating profile certs ...
	I1126 20:08:12.806061   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key
	I1126 20:08:12.806134   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.28ad082f
	I1126 20:08:12.806177   59960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key
	I1126 20:08:12.806189   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1126 20:08:12.806203   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1126 20:08:12.806214   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1126 20:08:12.806227   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1126 20:08:12.806238   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1126 20:08:12.806249   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1126 20:08:12.806265   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1126 20:08:12.806276   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1126 20:08:12.806330   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:08:12.806364   59960 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:08:12.806376   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:08:12.806404   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:08:12.806431   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:08:12.806458   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:08:12.806505   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:08:12.806543   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:12.806557   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem -> /usr/share/ca-certificates/4129.pem
	I1126 20:08:12.806568   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /usr/share/ca-certificates/41292.pem
	I1126 20:08:12.806631   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:08:12.824408   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:08:12.926228   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1126 20:08:12.930801   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1126 20:08:12.939401   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1126 20:08:12.947934   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1126 20:08:12.960335   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1126 20:08:12.964526   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1126 20:08:12.973104   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1126 20:08:12.978204   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1126 20:08:12.987576   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1126 20:08:12.991901   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1126 20:08:13.001289   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1126 20:08:13.006200   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1126 20:08:13.014443   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:08:13.039341   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:08:13.063520   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:08:13.085219   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:08:13.103037   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:08:13.123095   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:08:13.140681   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:08:13.160781   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:08:13.180406   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:08:13.200475   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:08:13.221024   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:08:13.239900   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1126 20:08:13.254738   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1126 20:08:13.269631   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1126 20:08:13.285317   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1126 20:08:13.300359   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1126 20:08:13.320893   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1126 20:08:13.340300   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1126 20:08:13.361527   59960 ssh_runner.go:195] Run: openssl version
	I1126 20:08:13.368555   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:08:13.377244   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.381511   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.381624   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.427936   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:08:13.437023   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:08:13.445274   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.449571   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.449682   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.496315   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:08:13.504808   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:08:13.513181   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.517313   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.517396   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.579337   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:08:13.588179   59960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:08:13.593330   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:08:13.645107   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:08:13.691020   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:08:13.735436   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:08:13.780762   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:08:13.830095   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:08:13.873290   59960 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1126 20:08:13.873415   59960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-278127-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:08:13.873445   59960 kube-vip.go:115] generating kube-vip config ...
	I1126 20:08:13.873508   59960 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1126 20:08:13.885513   59960 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:08:13.885577   59960 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1126 20:08:13.885657   59960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:08:13.893550   59960 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:08:13.893628   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1126 20:08:13.901912   59960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1126 20:08:13.916015   59960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:08:13.934936   59960 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1126 20:08:13.979363   59960 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1126 20:08:13.991396   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:08:14.018397   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:08:14.385132   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:08:14.402828   59960 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:08:14.403147   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:08:14.408967   59960 out.go:179] * Verifying Kubernetes components...
	I1126 20:08:14.411916   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:08:14.659853   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:08:14.678979   59960 kapi.go:59] client config for ha-278127: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1126 20:08:14.679061   59960 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1126 20:08:14.679322   59960 node_ready.go:35] waiting up to 6m0s for node "ha-278127-m02" to be "Ready" ...
	I1126 20:08:15.269402   59960 node_ready.go:49] node "ha-278127-m02" is "Ready"
	I1126 20:08:15.269438   59960 node_ready.go:38] duration metric: took 590.083677ms for node "ha-278127-m02" to be "Ready" ...
	I1126 20:08:15.269450   59960 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:08:15.269508   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:15.770378   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:16.271005   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:16.769624   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:17.269646   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:17.770292   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:18.270233   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:18.770225   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:19.269626   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:19.770251   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:20.270592   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:20.769691   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:21.269742   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:21.769575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:22.269640   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:22.770094   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:23.269745   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:23.770093   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:24.269839   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:24.770626   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:25.270510   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:25.770352   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:26.270238   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:26.770199   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:27.270553   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:27.770570   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:28.269631   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:28.770575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:29.269663   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:29.770438   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:30.269733   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:30.769570   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:31.269688   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:31.770556   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:32.270505   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:32.770152   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:33.269716   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:33.769765   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:34.269659   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:34.769641   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:35.269866   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:35.770030   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:36.270158   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:36.770014   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:37.270234   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:37.769610   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:38.270567   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:38.770558   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:39.269653   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:39.769895   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:40.270407   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:40.769781   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:41.270338   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:41.770411   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:42.269686   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:42.770028   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:43.269580   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:43.769636   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:44.269684   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:44.769627   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:45.272055   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:45.770418   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:46.269657   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:46.770575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:47.270036   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:47.770377   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:48.270502   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:48.770450   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:49.269719   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:49.770449   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:50.269903   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:50.769675   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:51.270539   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:51.770618   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:52.270336   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:52.770354   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:53.270340   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:53.769901   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:54.270054   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:54.769747   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:55.270283   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:55.770525   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:56.269881   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:56.769908   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:57.269834   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:57.769631   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:58.270414   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:58.770529   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:59.269820   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:59.770577   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:00.269749   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:00.770275   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:01.270165   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:01.769910   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:02.269673   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:02.770492   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:03.270339   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:03.769642   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:04.269668   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:04.770177   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:05.270062   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:05.770571   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:06.270286   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:06.770466   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:07.269878   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:07.770593   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:08.270292   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:08.770068   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:09.269767   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:09.769619   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:10.270146   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:10.769659   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:11.270311   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:11.770596   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:12.269893   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:12.769649   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:13.270341   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:13.770530   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:14.269596   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
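The run of identical `pgrep` lines above is minikube polling roughly every 500ms for a kube-apiserver process that never appears. The loop can be sketched as the following helper; the function name, timeout default, and lack of `sudo` are illustrative, not minikube's actual implementation (the log runs `sudo pgrep` because the apiserver runs as root):

```shell
#!/bin/sh
# Poll pgrep until a process matching PATTERN exists or TIMEOUT seconds pass.
# Mirrors the ~500ms retry cadence visible in the log above.
wait_for_process() {
  pattern="$1"
  timeout="${2:-60}"
  deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    # -f: match against the full command line; -x: require an exact
    # whole-line match; -n: newest matching process only
    if pgrep -xnf "$pattern" >/dev/null 2>&1; then
      return 0
    fi
    sleep 0.5
  done
  return 1
}
```

Usage corresponding to the log: `wait_for_process 'kube-apiserver.*minikube.*' 240`.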
	I1126 20:09:14.769532   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:14.769644   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:14.805181   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:14.805204   59960 cri.go:89] found id: ""
	I1126 20:09:14.805213   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:14.805269   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.809129   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:14.809206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:14.835451   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:14.835475   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:14.835480   59960 cri.go:89] found id: ""
	I1126 20:09:14.835487   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:14.835543   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.839249   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.842501   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:14.842574   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:14.867922   59960 cri.go:89] found id: ""
	I1126 20:09:14.867948   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.867957   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:14.867963   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:14.868022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:14.893599   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:14.893625   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:14.893630   59960 cri.go:89] found id: ""
	I1126 20:09:14.893638   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:14.893730   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.897540   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.901438   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:14.901540   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:14.929244   59960 cri.go:89] found id: ""
	I1126 20:09:14.929268   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.929277   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:14.929284   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:14.929340   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:14.956242   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:14.956264   59960 cri.go:89] found id: ""
	I1126 20:09:14.956272   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:14.956326   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.960197   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:14.960271   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:14.985332   59960 cri.go:89] found id: ""
	I1126 20:09:14.985407   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.985428   59960 logs.go:284] No container was found matching "kindnet"
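Each `crictl ps -a --quiet --name=NAME` call above prints one container ID per line; `logs.go` then reports `N containers: [...]`, or the `No container was found` warning when the list is empty. The counting step can be sketched as a one-line filter (helper name is illustrative):

```shell
#!/bin/sh
# Count container IDs the way the "N containers" log lines do:
# crictl --quiet output is one hex ID per line, so counting
# non-empty lines on stdin gives the container count.
count_containers() {
  grep -c . || true   # grep exits 1 on zero matches but still prints 0
}
```

Usage against a live node: `sudo crictl ps -a --quiet --name=etcd | count_containers`.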
	I1126 20:09:14.985455   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:14.985495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:15.015412   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:15.015491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:15.446082   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:15.438231    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.438877    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440458    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440891    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.442380    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:15.438231    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.438877    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440458    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440891    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.442380    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
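The repeated `connection refused` errors above mean nothing was listening on the apiserver port when `kubectl describe nodes` ran, consistent with the failed `pgrep` loop earlier. A direct probe of the `/healthz` endpoint distinguishes "port closed" from "serving but unhealthy"; port 8443 matches the log, while the helper name and curl flags are a sketch, not what minikube itself runs:

```shell
#!/bin/sh
# Probe the apiserver health endpoint on HOST:PORT (default localhost:8443).
# -k: skip TLS verification (the apiserver uses a cluster-local CA)
# -s -f: silent, and treat HTTP error status as failure
# --max-time: bound the probe so a closed port fails fast
check_apiserver() {
  if curl -ksf --max-time 2 "https://${1:-localhost:8443}/healthz" >/dev/null 2>&1; then
    echo "apiserver healthy"
  else
    echo "apiserver unreachable or unhealthy"
  fi
}
```

On the node in this log, `check_apiserver localhost:8443` would report unreachable, matching the kubectl failure.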
	I1126 20:09:15.446107   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:15.446122   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:15.474426   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:15.474452   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:15.514330   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:15.514364   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:15.582633   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:15.582662   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:15.636475   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:15.636508   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:15.718181   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:15.718215   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:15.814217   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:15.814253   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:15.826793   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:15.826823   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:15.854520   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:15.854550   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.382038   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:18.401602   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:18.401678   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:18.435808   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:18.435831   59960 cri.go:89] found id: ""
	I1126 20:09:18.435839   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:18.435907   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.439686   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:18.439801   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:18.476740   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:18.476764   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:18.476770   59960 cri.go:89] found id: ""
	I1126 20:09:18.476787   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:18.476889   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.480732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.484682   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:18.484783   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:18.511910   59960 cri.go:89] found id: ""
	I1126 20:09:18.511974   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.511989   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:18.511996   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:18.512055   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:18.547921   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:18.547988   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:18.548006   59960 cri.go:89] found id: ""
	I1126 20:09:18.548014   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:18.548071   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.552076   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.556982   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:18.557066   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:18.587286   59960 cri.go:89] found id: ""
	I1126 20:09:18.587313   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.587333   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:18.587340   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:18.587401   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:18.620541   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.620559   59960 cri.go:89] found id: ""
	I1126 20:09:18.620567   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:18.620626   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.624723   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:18.624796   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:18.653037   59960 cri.go:89] found id: ""
	I1126 20:09:18.653060   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.653068   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:18.653077   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:18.653090   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.684308   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:18.684335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:18.776764   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:18.776798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:18.865581   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:18.856655    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858014    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858939    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.859710    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.861248    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:18.856655    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858014    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858939    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.859710    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.861248    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:18.865603   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:18.865616   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:18.909234   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:18.909270   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:18.960436   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:18.960477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:18.990735   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:18.990766   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:19.069643   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:19.069722   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:19.104112   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:19.104137   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:19.118175   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:19.118204   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:19.148200   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:19.148229   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:21.687827   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:21.698536   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:21.698621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:21.730147   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:21.730171   59960 cri.go:89] found id: ""
	I1126 20:09:21.730180   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:21.730235   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.735922   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:21.736012   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:21.763452   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:21.763481   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:21.763486   59960 cri.go:89] found id: ""
	I1126 20:09:21.763494   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:21.763551   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.767451   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.771041   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:21.771140   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:21.803663   59960 cri.go:89] found id: ""
	I1126 20:09:21.803688   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.803697   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:21.803703   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:21.803767   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:21.832470   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:21.832496   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:21.832501   59960 cri.go:89] found id: ""
	I1126 20:09:21.832510   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:21.832567   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.836410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.840076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:21.840157   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:21.866968   59960 cri.go:89] found id: ""
	I1126 20:09:21.866994   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.867004   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:21.867011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:21.867093   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:21.892977   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:21.893000   59960 cri.go:89] found id: ""
	I1126 20:09:21.893008   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:21.893083   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.896906   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:21.897019   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:21.923720   59960 cri.go:89] found id: ""
	I1126 20:09:21.923744   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.923753   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:21.923762   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:21.923793   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:22.011751   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:22.003342    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.003880    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.005519    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.006189    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.007784    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:22.003342    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.003880    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.005519    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.006189    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.007784    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:22.011856   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:22.011890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:22.042091   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:22.042121   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:22.079857   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:22.079886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:22.179933   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:22.179973   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:22.207540   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:22.207568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:22.263434   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:22.263465   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:22.313145   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:22.313180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:22.365142   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:22.365177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:22.446886   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:22.446920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:22.483927   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:22.483961   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:24.996823   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:25.007913   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:25.007987   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:25.044777   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:25.044801   59960 cri.go:89] found id: ""
	I1126 20:09:25.044810   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:25.044870   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.048843   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:25.048923   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:25.083120   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:25.083187   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:25.083197   59960 cri.go:89] found id: ""
	I1126 20:09:25.083205   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:25.083271   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.086865   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.090526   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:25.090596   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:25.118710   59960 cri.go:89] found id: ""
	I1126 20:09:25.118735   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.118745   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:25.118752   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:25.118809   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:25.145818   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:25.145843   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:25.145850   59960 cri.go:89] found id: ""
	I1126 20:09:25.145857   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:25.145956   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.154268   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.159267   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:25.159348   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:25.185977   59960 cri.go:89] found id: ""
	I1126 20:09:25.186002   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.186011   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:25.186017   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:25.186072   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:25.213727   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:25.213751   59960 cri.go:89] found id: ""
	I1126 20:09:25.213760   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:25.213826   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.217850   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:25.217960   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:25.246743   59960 cri.go:89] found id: ""
	I1126 20:09:25.246769   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.246779   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:25.246788   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:25.246800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:25.321227   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:25.312798    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.313456    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315126    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315598    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.317138    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:25.312798    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.313456    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315126    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315598    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.317138    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:25.321251   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:25.321288   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:25.346983   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:25.347011   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:25.407991   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:25.408027   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:25.439857   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:25.439886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:25.467227   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:25.467252   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:25.549334   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:25.549371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:25.590791   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:25.590821   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:25.636096   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:25.636130   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:25.668287   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:25.668314   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:25.765804   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:25.765838   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:28.279160   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:28.290077   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:28.290149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:28.320697   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:28.320720   59960 cri.go:89] found id: ""
	I1126 20:09:28.320729   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:28.320786   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.324391   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:28.324466   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:28.351072   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:28.351094   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:28.351099   59960 cri.go:89] found id: ""
	I1126 20:09:28.351106   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:28.351161   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.355739   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.359260   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:28.359346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:28.386343   59960 cri.go:89] found id: ""
	I1126 20:09:28.386370   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.386383   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:28.386390   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:28.386457   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:28.413613   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:28.413635   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:28.413641   59960 cri.go:89] found id: ""
	I1126 20:09:28.413648   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:28.413701   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.417403   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.420731   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:28.420810   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:28.446127   59960 cri.go:89] found id: ""
	I1126 20:09:28.446202   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.446225   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:28.446245   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:28.446337   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:28.471432   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:28.471454   59960 cri.go:89] found id: ""
	I1126 20:09:28.471462   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:28.471545   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.475058   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:28.475141   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:28.502515   59960 cri.go:89] found id: ""
	I1126 20:09:28.502539   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.502549   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:28.502559   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:28.502570   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:28.514608   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:28.514637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:28.557861   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:28.557890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:28.627880   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:28.627917   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:28.659730   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:28.659757   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:28.725495   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:28.717349    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.718072    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.719611    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.720154    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.722097    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:28.717349    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.718072    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.719611    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.720154    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.722097    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:28.725519   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:28.725532   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:28.763157   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:28.763187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:28.828543   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:28.828573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:28.855674   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:28.855707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:28.888296   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:28.888323   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:28.966101   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:28.966135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:31.560965   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:31.571673   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:31.571744   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:31.601161   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:31.601182   59960 cri.go:89] found id: ""
	I1126 20:09:31.601190   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:31.601269   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.605397   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:31.605476   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:31.631813   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:31.631835   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:31.631841   59960 cri.go:89] found id: ""
	I1126 20:09:31.631848   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:31.631904   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.635710   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.639546   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:31.639621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:31.674540   59960 cri.go:89] found id: ""
	I1126 20:09:31.674569   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.674578   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:31.674585   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:31.674643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:31.705780   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:31.705799   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:31.705803   59960 cri.go:89] found id: ""
	I1126 20:09:31.705810   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:31.705865   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.709862   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.713500   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:31.713591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:31.739394   59960 cri.go:89] found id: ""
	I1126 20:09:31.739419   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.739429   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:31.739435   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:31.739492   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:31.765811   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:31.765834   59960 cri.go:89] found id: ""
	I1126 20:09:31.765842   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:31.765960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.769463   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:31.769554   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:31.802081   59960 cri.go:89] found id: ""
	I1126 20:09:31.802107   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.802116   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:31.802153   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:31.802172   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:31.849273   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:31.849308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:31.902662   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:31.902697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:31.990675   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:31.990710   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:32.022637   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:32.022667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:32.100797   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:32.092180    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.093036    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.094703    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.095415    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.097142    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:32.092180    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.093036    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.094703    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.095415    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.097142    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:32.100820   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:32.100833   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:32.146149   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:32.146184   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:32.172943   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:32.172970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:32.199037   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:32.199063   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:32.306507   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:32.306540   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:32.319193   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:32.319221   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:34.849302   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:34.860158   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:34.860250   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:34.887094   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:34.887113   59960 cri.go:89] found id: ""
	I1126 20:09:34.887121   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:34.887177   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.890890   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:34.890964   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:34.921149   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:34.921177   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:34.921182   59960 cri.go:89] found id: ""
	I1126 20:09:34.921189   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:34.921243   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.924938   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.928493   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:34.928569   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:34.954052   59960 cri.go:89] found id: ""
	I1126 20:09:34.954078   59960 logs.go:282] 0 containers: []
	W1126 20:09:34.954087   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:34.954093   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:34.954206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:34.985031   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:34.985054   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:34.985059   59960 cri.go:89] found id: ""
	I1126 20:09:34.985067   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:34.985121   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.989050   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.992852   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:34.992934   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:35.019287   59960 cri.go:89] found id: ""
	I1126 20:09:35.019314   59960 logs.go:282] 0 containers: []
	W1126 20:09:35.019323   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:35.019330   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:35.019393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:35.049190   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:35.049217   59960 cri.go:89] found id: ""
	I1126 20:09:35.049237   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:35.049313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:35.053627   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:35.053713   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:35.091326   59960 cri.go:89] found id: ""
	I1126 20:09:35.091394   59960 logs.go:282] 0 containers: []
	W1126 20:09:35.091420   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:35.091440   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:35.091476   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:35.188523   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:35.188560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:35.220725   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:35.220755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:35.250614   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:35.250643   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:35.289963   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:35.289995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:35.303153   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:35.303180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:35.375929   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:35.367382    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.368117    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.369869    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.370618    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.372228    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:35.367382    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.368117    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.369869    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.370618    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.372228    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:35.375952   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:35.375968   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:35.403037   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:35.403066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:35.445367   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:35.445402   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:35.491101   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:35.491135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:35.561489   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:35.561524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:38.150634   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:38.161275   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:38.161346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:38.189434   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:38.189461   59960 cri.go:89] found id: ""
	I1126 20:09:38.189469   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:38.189530   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.195206   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:38.195288   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:38.223137   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:38.223160   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:38.223166   59960 cri.go:89] found id: ""
	I1126 20:09:38.223173   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:38.223227   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.226977   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.230547   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:38.230624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:38.255698   59960 cri.go:89] found id: ""
	I1126 20:09:38.255723   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.255732   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:38.255742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:38.255800   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:38.285059   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:38.285082   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:38.285087   59960 cri.go:89] found id: ""
	I1126 20:09:38.285097   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:38.285151   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.288799   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.292713   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:38.292786   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:38.318862   59960 cri.go:89] found id: ""
	I1126 20:09:38.318889   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.318898   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:38.318905   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:38.318963   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:38.346973   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:38.346996   59960 cri.go:89] found id: ""
	I1126 20:09:38.347005   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:38.347057   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.350729   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:38.350856   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:38.378801   59960 cri.go:89] found id: ""
	I1126 20:09:38.378827   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.378836   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:38.378845   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:38.378915   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:38.390980   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:38.391009   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:38.422522   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:38.422550   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:38.469058   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:38.469133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:38.523109   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:38.523182   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:38.559691   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:38.559716   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:38.646468   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:38.646504   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:38.751509   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:38.751551   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:38.836492   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:38.827693    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.828759    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.829560    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.830636    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.831318    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:38.827693    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.828759    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.829560    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.830636    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.831318    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:38.836516   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:38.836528   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:38.876587   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:38.876623   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:38.910948   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:38.910987   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:41.443533   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:41.454798   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:41.454873   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:41.485670   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:41.485699   59960 cri.go:89] found id: ""
	I1126 20:09:41.485707   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:41.485761   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.489619   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:41.489690   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:41.525686   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:41.525710   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:41.525714   59960 cri.go:89] found id: ""
	I1126 20:09:41.525722   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:41.525777   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.536491   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.541670   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:41.541797   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:41.570295   59960 cri.go:89] found id: ""
	I1126 20:09:41.570319   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.570327   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:41.570334   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:41.570393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:41.598145   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:41.598169   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:41.598175   59960 cri.go:89] found id: ""
	I1126 20:09:41.598182   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:41.598258   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.602230   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.606445   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:41.606530   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:41.636614   59960 cri.go:89] found id: ""
	I1126 20:09:41.636637   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.636646   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:41.636652   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:41.636707   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:41.663292   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:41.663315   59960 cri.go:89] found id: ""
	I1126 20:09:41.663327   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:41.663382   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.667194   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:41.667277   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:41.696056   59960 cri.go:89] found id: ""
	I1126 20:09:41.696081   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.696090   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:41.696099   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:41.696110   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:41.794427   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:41.794463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:41.822463   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:41.822493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:41.871566   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:41.871599   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:41.916725   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:41.916759   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:41.950381   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:41.950410   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:41.982658   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:41.982692   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:41.996639   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:41.996672   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:42.087350   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:42.079184    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.079744    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081320    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081972    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.083647    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:42.079184    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.079744    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081320    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081972    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.083647    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:42.087369   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:42.087384   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:42.175919   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:42.176012   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:42.281379   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:42.281406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:44.882212   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:44.893873   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:44.893969   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:44.923663   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:44.923683   59960 cri.go:89] found id: ""
	I1126 20:09:44.923691   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:44.923744   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.927892   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:44.927959   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:44.958403   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:44.958423   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:44.958427   59960 cri.go:89] found id: ""
	I1126 20:09:44.958434   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:44.958486   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.962367   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.966913   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:44.966985   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:45.000482   59960 cri.go:89] found id: ""
	I1126 20:09:45.000503   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.000511   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:45.000517   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:45.000572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:45.031381   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:45.031401   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:45.031406   59960 cri.go:89] found id: ""
	I1126 20:09:45.031414   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:45.031471   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.036637   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.042551   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:45.042723   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:45.086906   59960 cri.go:89] found id: ""
	I1126 20:09:45.086987   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.087026   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:45.087050   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:45.087153   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:45.137504   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:45.137578   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:45.137598   59960 cri.go:89] found id: ""
	I1126 20:09:45.137621   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:45.137715   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.143678   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.149235   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:45.149438   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:45.196979   59960 cri.go:89] found id: ""
	I1126 20:09:45.197063   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.197089   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:45.197146   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:45.197191   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:45.267194   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:45.267280   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:45.386434   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:45.386524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:45.468233   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:45.459943    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.460742    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462336    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462624    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.464644    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:45.459943    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.460742    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462336    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462624    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.464644    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:45.468305   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:45.468342   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:45.541622   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:45.541649   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:45.613664   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:45.613695   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:45.641765   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:45.641794   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:45.702809   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:45.702837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:45.807019   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:45.807056   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:45.820258   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:45.820289   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:45.867345   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:45.867376   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:45.921560   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:45.921596   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:48.454091   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:48.464670   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:48.464755   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:48.493056   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:48.493081   59960 cri.go:89] found id: ""
	I1126 20:09:48.493089   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:48.493144   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.496943   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:48.497007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:48.524995   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:48.525020   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:48.525025   59960 cri.go:89] found id: ""
	I1126 20:09:48.525032   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:48.525085   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.528726   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.532247   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:48.532317   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:48.557862   59960 cri.go:89] found id: ""
	I1126 20:09:48.557887   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.557896   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:48.557902   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:48.557988   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:48.587744   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:48.587765   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:48.587770   59960 cri.go:89] found id: ""
	I1126 20:09:48.587777   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:48.587832   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.591388   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.594875   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:48.594985   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:48.627277   59960 cri.go:89] found id: ""
	I1126 20:09:48.627298   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.627313   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:48.627352   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:48.627433   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:48.664063   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:48.664088   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:48.664102   59960 cri.go:89] found id: ""
	I1126 20:09:48.664110   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:48.664222   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.668219   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.671608   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:48.671680   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:48.700294   59960 cri.go:89] found id: ""
	I1126 20:09:48.700322   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.700331   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:48.700340   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:48.700351   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:48.793887   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:48.793974   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:48.807445   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:48.807472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:48.881133   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:48.873596    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.874156    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.875737    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.876232    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.877299    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:48.873596    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.874156    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.875737    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.876232    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.877299    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:48.881155   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:48.881167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:48.926338   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:48.926370   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:48.980929   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:48.980964   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:49.008703   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:49.008729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:49.035020   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:49.035134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:49.075209   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:49.075239   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:49.102778   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:49.102808   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:49.148209   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:49.148243   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:49.175449   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:49.175477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:51.750461   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:51.761173   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:51.761247   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:51.792174   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:51.792200   59960 cri.go:89] found id: ""
	I1126 20:09:51.792207   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:51.792272   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.796194   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:51.796266   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:51.826309   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:51.826333   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:51.826339   59960 cri.go:89] found id: ""
	I1126 20:09:51.826346   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:51.826408   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.830049   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.833626   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:51.833703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:51.864668   59960 cri.go:89] found id: ""
	I1126 20:09:51.864693   59960 logs.go:282] 0 containers: []
	W1126 20:09:51.864702   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:51.864709   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:51.864770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:51.902154   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:51.902178   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:51.902184   59960 cri.go:89] found id: ""
	I1126 20:09:51.902191   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:51.902244   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.906099   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.909550   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:51.909622   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:51.940956   59960 cri.go:89] found id: ""
	I1126 20:09:51.940984   59960 logs.go:282] 0 containers: []
	W1126 20:09:51.940993   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:51.941000   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:51.941057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:51.967086   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:51.967112   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:51.967117   59960 cri.go:89] found id: ""
	I1126 20:09:51.967125   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:51.967206   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.970992   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.974344   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:51.974463   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:52.006654   59960 cri.go:89] found id: ""
	I1126 20:09:52.006675   59960 logs.go:282] 0 containers: []
	W1126 20:09:52.006684   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:52.006693   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:52.006705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:52.033587   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:52.033621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:52.062777   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:52.062810   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:52.136250   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:52.127112    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.127989    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.129548    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.130437    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.132317    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:52.127112    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.127989    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.129548    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.130437    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.132317    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:52.136279   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:52.136292   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:52.165716   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:52.165792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:52.210120   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:52.210157   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:52.266182   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:52.266228   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:52.296704   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:52.296732   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:52.373394   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:52.373432   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:52.409405   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:52.409436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:52.508717   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:52.508755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:52.520510   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:52.520577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.069988   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:55.081385   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:55.081477   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:55.109272   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:55.109297   59960 cri.go:89] found id: ""
	I1126 20:09:55.109306   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:55.109393   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.113332   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:55.113409   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:55.144644   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.144728   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:55.144749   59960 cri.go:89] found id: ""
	I1126 20:09:55.144782   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:55.144860   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.148962   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.153598   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:55.153724   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:55.180168   59960 cri.go:89] found id: ""
	I1126 20:09:55.180235   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.180274   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:55.180302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:55.180378   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:55.207578   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:55.207606   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:55.207611   59960 cri.go:89] found id: ""
	I1126 20:09:55.207621   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:55.207698   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.211665   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.215295   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:55.215371   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:55.243201   59960 cri.go:89] found id: ""
	I1126 20:09:55.243228   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.243237   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:55.243243   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:55.243299   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:55.273345   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:55.273370   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:55.273375   59960 cri.go:89] found id: ""
	I1126 20:09:55.273382   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:55.273434   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.277156   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.280557   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:55.280629   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:55.306973   59960 cri.go:89] found id: ""
	I1126 20:09:55.307037   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.307052   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:55.307061   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:55.307072   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:55.405440   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:55.405474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:55.418598   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:55.418628   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:55.487261   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:55.479261    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.479915    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481393    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481846    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.483618    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:55.479261    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.479915    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481393    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481846    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.483618    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:55.487286   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:55.487299   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:55.531555   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:55.531626   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:55.601020   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:55.601057   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:55.632319   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:55.632347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:55.660851   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:55.660881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:55.742963   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:55.742998   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:55.773047   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:55.773076   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.826960   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:55.826991   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:55.855917   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:55.855944   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:58.399772   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:58.415975   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:58.416043   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:58.442760   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:58.442782   59960 cri.go:89] found id: ""
	I1126 20:09:58.442792   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:58.442850   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.446527   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:58.446620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:58.476049   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:58.476071   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:58.476076   59960 cri.go:89] found id: ""
	I1126 20:09:58.476084   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:58.476141   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.480019   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.483716   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:58.483799   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:58.514116   59960 cri.go:89] found id: ""
	I1126 20:09:58.514138   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.514147   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:58.514153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:58.514220   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:58.547211   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:58.547233   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:58.547239   59960 cri.go:89] found id: ""
	I1126 20:09:58.547257   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:58.547342   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.551299   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.554848   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:58.554921   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:58.583768   59960 cri.go:89] found id: ""
	I1126 20:09:58.583793   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.583802   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:58.583809   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:58.583865   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:58.611601   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:58.611635   59960 cri.go:89] found id: ""
	I1126 20:09:58.611644   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:09:58.611703   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.615732   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:58.615802   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:58.646048   59960 cri.go:89] found id: ""
	I1126 20:09:58.646087   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.646096   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:58.646106   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:58.646135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:58.745296   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:58.745332   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:58.820265   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:58.811642    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.812262    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.813785    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.814448    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.815924    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:58.811642    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.812262    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.813785    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.814448    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.815924    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:58.820294   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:58.820308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:58.877523   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:58.877556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:58.904630   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:58.904656   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:58.980105   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:58.980138   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:58.992220   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:58.992248   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:59.019086   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:59.019112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:59.058229   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:59.058260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:59.106394   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:59.106427   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:59.134445   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:59.134474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:01.667677   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:01.679153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:01.679227   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:01.713101   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:01.713122   59960 cri.go:89] found id: ""
	I1126 20:10:01.713130   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:01.713185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.717042   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:01.717117   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:01.748792   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:01.748817   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:01.748823   59960 cri.go:89] found id: ""
	I1126 20:10:01.748832   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:01.748889   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.752752   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.756411   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:01.756487   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:01.785898   59960 cri.go:89] found id: ""
	I1126 20:10:01.785954   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.785964   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:01.785971   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:01.786033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:01.817470   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:01.817496   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:01.817502   59960 cri.go:89] found id: ""
	I1126 20:10:01.817509   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:01.817567   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.821688   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.826052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:01.826203   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:01.856542   59960 cri.go:89] found id: ""
	I1126 20:10:01.856568   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.856590   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:01.856620   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:01.856742   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:01.893138   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:01.893218   59960 cri.go:89] found id: ""
	I1126 20:10:01.893242   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:01.893337   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.897863   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:01.898026   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:01.935921   59960 cri.go:89] found id: ""
	I1126 20:10:01.935951   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.935961   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:01.935971   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:01.935985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:01.973303   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:01.973332   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:02.028454   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:02.028493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:02.074241   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:02.074272   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:02.162898   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:02.162936   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:02.176057   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:02.176088   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:02.235629   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:02.235665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:02.306607   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:02.306643   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:02.337699   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:02.337729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:02.374553   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:02.374582   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:02.481202   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:02.481238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:02.563313   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:02.555444    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.556211    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.557668    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.558242    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.559786    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:02.555444    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.556211    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.557668    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.558242    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.559786    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:05.064305   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:05.075852   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:05.075925   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:05.108322   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:05.108345   59960 cri.go:89] found id: ""
	I1126 20:10:05.108354   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:05.108410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.112382   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:05.112460   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:05.140946   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:05.141021   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:05.141040   59960 cri.go:89] found id: ""
	I1126 20:10:05.141063   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:05.141150   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.145278   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.148898   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:05.148974   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:05.176423   59960 cri.go:89] found id: ""
	I1126 20:10:05.176450   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.176459   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:05.176466   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:05.176527   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:05.204990   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:05.205013   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:05.205018   59960 cri.go:89] found id: ""
	I1126 20:10:05.205026   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:05.205088   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.208959   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.212627   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:05.212730   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:05.239581   59960 cri.go:89] found id: ""
	I1126 20:10:05.239604   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.239614   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:05.239620   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:05.239679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:05.268087   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:05.268110   59960 cri.go:89] found id: ""
	I1126 20:10:05.268119   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:05.268176   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.271819   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:05.271923   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:05.298753   59960 cri.go:89] found id: ""
	I1126 20:10:05.298819   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.298833   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:05.298843   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:05.298855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:05.325518   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:05.325548   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:05.376406   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:05.376438   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:05.428781   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:05.428943   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:05.459754   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:05.459786   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:05.487550   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:05.487581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:05.520035   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:05.520071   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:05.616425   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:05.616503   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:05.630189   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:05.630221   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:05.715272   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:05.705315    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.706188    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708012    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708749    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.710497    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:05.705315    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.706188    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708012    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708749    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.710497    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:05.715301   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:05.715315   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:05.768473   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:05.768507   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:08.349688   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:08.360619   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:08.360693   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:08.388583   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:08.388610   59960 cri.go:89] found id: ""
	I1126 20:10:08.388619   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:08.388678   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.392264   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:08.392334   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:08.418523   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:08.418549   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:08.418554   59960 cri.go:89] found id: ""
	I1126 20:10:08.418562   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:08.418621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.422368   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.425851   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:08.425954   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:08.456520   59960 cri.go:89] found id: ""
	I1126 20:10:08.456546   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.456555   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:08.456562   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:08.456620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:08.487158   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:08.487182   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:08.487186   59960 cri.go:89] found id: ""
	I1126 20:10:08.487195   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:08.487268   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.491193   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.494690   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:08.494760   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:08.523674   59960 cri.go:89] found id: ""
	I1126 20:10:08.523699   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.523708   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:08.523715   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:08.523773   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:08.569422   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:08.569442   59960 cri.go:89] found id: ""
	I1126 20:10:08.569449   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:08.569505   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.572997   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:08.573065   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:08.599736   59960 cri.go:89] found id: ""
	I1126 20:10:08.599763   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.599772   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:08.599781   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:08.599799   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:08.674461   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:08.665974    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.666705    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.668447    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.669108    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.670766    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:08.665974    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.666705    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.668447    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.669108    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.670766    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:08.674482   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:08.674495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:08.726546   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:08.726591   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:08.783639   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:08.783690   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:08.860709   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:08.860759   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:08.873030   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:08.873058   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:08.899170   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:08.899199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:08.940773   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:08.940855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:08.969671   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:08.969762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:09.001544   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:09.001621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:09.035799   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:09.035837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:11.634159   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:11.645145   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:11.645262   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:11.684091   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:11.684113   59960 cri.go:89] found id: ""
	I1126 20:10:11.684121   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:11.684198   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.687930   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:11.688002   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:11.716342   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:11.716366   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:11.716372   59960 cri.go:89] found id: ""
	I1126 20:10:11.716380   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:11.716438   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.720592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.724106   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:11.724181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:11.750971   59960 cri.go:89] found id: ""
	I1126 20:10:11.750997   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.751007   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:11.751014   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:11.751140   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:11.778888   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:11.778912   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:11.778917   59960 cri.go:89] found id: ""
	I1126 20:10:11.778924   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:11.778979   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.782704   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.786153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:11.786245   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:11.812859   59960 cri.go:89] found id: ""
	I1126 20:10:11.812924   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.812953   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:11.812972   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:11.813047   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:11.844995   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:11.845065   59960 cri.go:89] found id: ""
	I1126 20:10:11.845089   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:11.845159   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.848928   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:11.849056   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:11.878557   59960 cri.go:89] found id: ""
	I1126 20:10:11.878634   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.878657   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:11.878674   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:11.878686   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:11.911996   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:11.912024   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:11.957531   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:11.957700   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:12.002561   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:12.002600   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:12.037611   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:12.037655   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:12.124659   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:12.124695   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:12.157527   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:12.157559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:12.255561   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:12.255597   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:12.270701   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:12.270727   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:12.344084   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:12.335378    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.336132    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.337729    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.338527    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.340203    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:12.335378    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.336132    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.337729    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.338527    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.340203    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:12.344111   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:12.344127   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:12.414064   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:12.414099   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:14.957062   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:14.971279   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:14.971358   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:15.002850   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:15.002871   59960 cri.go:89] found id: ""
	I1126 20:10:15.002879   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:15.002953   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.007210   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:15.007317   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:15.044904   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:15.044929   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:15.044934   59960 cri.go:89] found id: ""
	I1126 20:10:15.044943   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:15.045037   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.050180   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.055192   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:15.055293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:15.087772   59960 cri.go:89] found id: ""
	I1126 20:10:15.087798   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.087815   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:15.087822   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:15.087883   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:15.117095   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:15.117114   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:15.117119   59960 cri.go:89] found id: ""
	I1126 20:10:15.117127   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:15.117185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.120995   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.124760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:15.124885   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:15.157854   59960 cri.go:89] found id: ""
	I1126 20:10:15.157954   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.157994   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:15.158017   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:15.158084   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:15.190383   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:15.190407   59960 cri.go:89] found id: ""
	I1126 20:10:15.190417   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:15.190474   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.194524   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:15.194624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:15.223311   59960 cri.go:89] found id: ""
	I1126 20:10:15.223337   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.223346   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:15.223355   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:15.223366   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:15.236105   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:15.236134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:15.263408   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:15.263436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:15.308099   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:15.308133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:15.370222   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:15.370258   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:15.412978   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:15.413009   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:15.482330   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:15.473679    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.474420    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476124    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476749    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.478398    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:15.473679    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.474420    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476124    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476749    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.478398    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:15.482403   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:15.482428   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:15.528305   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:15.528335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:15.564111   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:15.564138   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:15.592541   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:15.592569   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:15.673319   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:15.673357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:18.279646   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:18.290358   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:18.290427   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:18.319136   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:18.319159   59960 cri.go:89] found id: ""
	I1126 20:10:18.319168   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:18.319225   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.322893   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:18.322967   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:18.350092   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:18.350120   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:18.350126   59960 cri.go:89] found id: ""
	I1126 20:10:18.350139   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:18.350193   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.354777   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.358503   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:18.358602   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:18.396162   59960 cri.go:89] found id: ""
	I1126 20:10:18.396185   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.396193   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:18.396199   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:18.396262   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:18.430093   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:18.430119   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:18.430124   59960 cri.go:89] found id: ""
	I1126 20:10:18.430131   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:18.430196   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.434456   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.438374   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:18.438451   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:18.478030   59960 cri.go:89] found id: ""
	I1126 20:10:18.478058   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.478070   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:18.478076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:18.478137   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:18.506317   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:18.506340   59960 cri.go:89] found id: ""
	I1126 20:10:18.506349   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:18.506410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.510476   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:18.510552   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:18.550337   59960 cri.go:89] found id: ""
	I1126 20:10:18.550408   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.550436   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:18.550454   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:18.550487   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:18.621602   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:18.613602    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.614230    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.615899    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.616339    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.617881    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:18.613602    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.614230    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.615899    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.616339    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.617881    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:18.621625   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:18.621638   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:18.648795   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:18.648824   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:18.691314   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:18.691358   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:18.771327   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:18.771367   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:18.808287   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:18.808319   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:18.907011   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:18.907048   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:18.919575   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:18.919605   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:18.961664   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:18.961697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:19.020056   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:19.020092   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:19.050179   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:19.050206   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:21.599106   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:21.611209   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:21.611309   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:21.639207   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:21.639229   59960 cri.go:89] found id: ""
	I1126 20:10:21.639238   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:21.639296   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.643290   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:21.643365   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:21.675608   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:21.675633   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:21.675639   59960 cri.go:89] found id: ""
	I1126 20:10:21.675648   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:21.675702   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.679772   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.683385   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:21.683511   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:21.719004   59960 cri.go:89] found id: ""
	I1126 20:10:21.719078   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.719102   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:21.719123   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:21.719196   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:21.745555   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:21.745634   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:21.745660   59960 cri.go:89] found id: ""
	I1126 20:10:21.745681   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:21.745750   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.750313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.753830   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:21.753907   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:21.781119   59960 cri.go:89] found id: ""
	I1126 20:10:21.781199   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.781222   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:21.781243   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:21.781347   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:21.809894   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:21.810006   59960 cri.go:89] found id: ""
	I1126 20:10:21.810022   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:21.810092   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.813756   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:21.813853   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:21.840725   59960 cri.go:89] found id: ""
	I1126 20:10:21.840751   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.840760   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:21.840769   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:21.840781   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:21.854145   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:21.854177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:21.884873   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:21.884902   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:21.936427   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:21.936463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:21.990170   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:21.990205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:22.077016   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:22.077064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:22.106941   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:22.106974   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:22.136672   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:22.136703   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:22.235594   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:22.235630   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:22.305008   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:22.295860    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.296666    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.298548    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.299084    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.300765    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:22.295860    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.296666    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.298548    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.299084    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.300765    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:22.305032   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:22.305046   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:22.378673   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:22.378711   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:24.920612   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:24.931941   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:24.932015   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:24.958956   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:24.958979   59960 cri.go:89] found id: ""
	I1126 20:10:24.958988   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:24.959047   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.962853   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:24.962931   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:24.989108   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:24.989130   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:24.989134   59960 cri.go:89] found id: ""
	I1126 20:10:24.989141   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:24.989195   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.992756   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.996360   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:24.996431   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:25.023636   59960 cri.go:89] found id: ""
	I1126 20:10:25.023660   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.023670   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:25.023676   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:25.023751   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:25.056300   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:25.056325   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:25.056331   59960 cri.go:89] found id: ""
	I1126 20:10:25.056339   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:25.056407   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.060822   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.066693   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:25.066825   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:25.098171   59960 cri.go:89] found id: ""
	I1126 20:10:25.098239   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.098258   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:25.098265   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:25.098344   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:25.129634   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:25.129655   59960 cri.go:89] found id: ""
	I1126 20:10:25.129664   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:25.129759   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.134599   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:25.134715   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:25.166870   59960 cri.go:89] found id: ""
	I1126 20:10:25.166896   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.166905   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:25.166918   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:25.166931   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:25.201303   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:25.201335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:25.234106   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:25.234132   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:25.335293   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:25.335329   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:25.367895   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:25.367920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:25.408499   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:25.408540   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:25.489459   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:25.489496   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:25.525614   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:25.525642   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:25.540937   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:25.541079   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:25.619457   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:25.611129    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.611986    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.613567    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.614319    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.615842    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:25.611129    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.611986    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.613567    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.614319    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.615842    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:25.619480   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:25.619494   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:25.667380   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:25.667419   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.233076   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:28.244698   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:28.244770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:28.272507   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:28.272530   59960 cri.go:89] found id: ""
	I1126 20:10:28.272538   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:28.272596   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.276257   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:28.276333   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:28.303315   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:28.303337   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:28.303342   59960 cri.go:89] found id: ""
	I1126 20:10:28.303349   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:28.303429   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.307300   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.310655   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:28.310727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:28.337118   59960 cri.go:89] found id: ""
	I1126 20:10:28.337140   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.337150   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:28.337156   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:28.337214   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:28.364328   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.364352   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:28.364358   59960 cri.go:89] found id: ""
	I1126 20:10:28.364374   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:28.364436   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.368741   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.372299   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:28.372385   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:28.398315   59960 cri.go:89] found id: ""
	I1126 20:10:28.398342   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.398351   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:28.398357   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:28.398418   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:28.426255   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:28.426276   59960 cri.go:89] found id: ""
	I1126 20:10:28.426287   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:28.426342   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.429863   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:28.430017   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:28.456908   59960 cri.go:89] found id: ""
	I1126 20:10:28.456933   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.456942   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:28.456951   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:28.456962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:28.532783   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:28.532820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:28.637119   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:28.637160   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:28.711269   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:28.702783    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.703978    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.704633    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706176    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706692    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:28.702783    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.703978    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.704633    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706176    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706692    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:28.711288   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:28.711304   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:28.737855   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:28.737883   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:28.789442   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:28.789477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:28.820705   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:28.820738   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:28.855530   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:28.855560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:28.868297   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:28.868324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:28.913639   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:28.913673   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.973350   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:28.973386   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:31.500924   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:31.511869   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:31.511943   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:31.546414   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:31.546447   59960 cri.go:89] found id: ""
	I1126 20:10:31.546456   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:31.546559   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.550296   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:31.550368   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:31.577840   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:31.577859   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:31.577864   59960 cri.go:89] found id: ""
	I1126 20:10:31.577870   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:31.577967   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.581789   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.585352   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:31.585421   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:31.616396   59960 cri.go:89] found id: ""
	I1126 20:10:31.616419   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.616428   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:31.616435   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:31.616491   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:31.641907   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:31.641971   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:31.641977   59960 cri.go:89] found id: ""
	I1126 20:10:31.641984   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:31.642048   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.645886   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.649651   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:31.649732   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:31.682488   59960 cri.go:89] found id: ""
	I1126 20:10:31.682512   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.682521   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:31.682527   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:31.682597   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:31.713608   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:31.713632   59960 cri.go:89] found id: ""
	I1126 20:10:31.713641   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:31.713693   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.717274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:31.717349   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:31.750907   59960 cri.go:89] found id: ""
	I1126 20:10:31.750934   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.750948   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:31.750957   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:31.750970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:31.822403   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:31.813458    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.814237    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.815876    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.816493    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.818239    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:31.813458    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.814237    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.815876    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.816493    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.818239    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:31.822425   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:31.822440   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:31.849676   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:31.849705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:31.891923   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:31.891959   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:31.944564   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:31.944608   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:32.015493   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:32.015577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:32.047447   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:32.047480   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:32.127183   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:32.127225   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:32.229734   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:32.229767   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:32.243678   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:32.243719   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:32.271264   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:32.271291   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:34.809253   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:34.819692   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:34.819817   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:34.846220   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:34.846240   59960 cri.go:89] found id: ""
	I1126 20:10:34.846248   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:34.846302   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.849960   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:34.850035   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:34.875486   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:34.875510   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:34.875515   59960 cri.go:89] found id: ""
	I1126 20:10:34.875522   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:34.875591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.879655   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.883266   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:34.883341   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:34.910257   59960 cri.go:89] found id: ""
	I1126 20:10:34.910286   59960 logs.go:282] 0 containers: []
	W1126 20:10:34.910295   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:34.910302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:34.910359   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:34.936501   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:34.936526   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:34.936531   59960 cri.go:89] found id: ""
	I1126 20:10:34.936539   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:34.936602   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.940297   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.943886   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:34.943960   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:34.970440   59960 cri.go:89] found id: ""
	I1126 20:10:34.970467   59960 logs.go:282] 0 containers: []
	W1126 20:10:34.970476   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:34.970482   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:34.970540   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:34.996813   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:34.996833   59960 cri.go:89] found id: ""
	I1126 20:10:34.996842   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:34.996901   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:35.000962   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:35.001030   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:35.029207   59960 cri.go:89] found id: ""
	I1126 20:10:35.029229   59960 logs.go:282] 0 containers: []
	W1126 20:10:35.029237   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:35.029247   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:35.029259   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:35.089280   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:35.089316   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:35.137518   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:35.137557   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:35.198701   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:35.198741   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:35.226526   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:35.226560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:35.308302   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:35.308341   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:35.411713   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:35.411751   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:35.425089   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:35.425118   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:35.496500   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:35.487044    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.487890    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.489861    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.490651    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.492443    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:35.487044    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.487890    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.489861    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.490651    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.492443    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:35.496523   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:35.496538   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:35.521713   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:35.521740   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:35.552491   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:35.552520   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:38.092147   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:38.105386   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:38.105494   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:38.134115   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:38.134183   59960 cri.go:89] found id: ""
	I1126 20:10:38.134204   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:38.134297   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.138342   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:38.138463   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:38.165373   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:38.165448   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:38.165468   59960 cri.go:89] found id: ""
	I1126 20:10:38.165492   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:38.165591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.169464   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.173100   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:38.173220   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:38.201795   59960 cri.go:89] found id: ""
	I1126 20:10:38.201818   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.201826   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:38.201836   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:38.201895   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:38.234752   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:38.234776   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:38.234782   59960 cri.go:89] found id: ""
	I1126 20:10:38.234789   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:38.234845   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.239023   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.242779   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:38.242854   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:38.271155   59960 cri.go:89] found id: ""
	I1126 20:10:38.271184   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.271193   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:38.271200   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:38.271261   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:38.298657   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:38.298682   59960 cri.go:89] found id: ""
	I1126 20:10:38.298691   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:38.298766   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.302858   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:38.302929   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:38.330494   59960 cri.go:89] found id: ""
	I1126 20:10:38.330520   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.330529   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:38.330538   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:38.330570   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:38.356340   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:38.356374   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:38.401509   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:38.401542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:38.463681   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:38.463719   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:38.496848   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:38.496881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:38.524848   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:38.524875   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:38.607033   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:38.607098   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:38.709803   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:38.709840   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:38.722963   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:38.722995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:38.796592   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:38.787909    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.788704    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.790425    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.791012    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.792912    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:38.787909    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.788704    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.790425    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.791012    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.792912    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:38.796617   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:38.796635   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:38.836671   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:38.836707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:41.373598   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:41.384711   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:41.384792   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:41.414012   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:41.414038   59960 cri.go:89] found id: ""
	I1126 20:10:41.414047   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:41.414103   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.417961   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:41.418036   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:41.450051   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:41.450076   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:41.450082   59960 cri.go:89] found id: ""
	I1126 20:10:41.450089   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:41.450147   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.455240   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.459174   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:41.459275   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:41.487216   59960 cri.go:89] found id: ""
	I1126 20:10:41.487241   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.487250   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:41.487257   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:41.487340   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:41.515666   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:41.515739   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:41.515751   59960 cri.go:89] found id: ""
	I1126 20:10:41.515759   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:41.515817   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.519735   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.523565   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:41.523639   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:41.554213   59960 cri.go:89] found id: ""
	I1126 20:10:41.554240   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.554250   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:41.554256   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:41.554321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:41.584766   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:41.584790   59960 cri.go:89] found id: ""
	I1126 20:10:41.584799   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:41.584861   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.589437   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:41.589510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:41.616610   59960 cri.go:89] found id: ""
	I1126 20:10:41.616638   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.616648   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:41.616657   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:41.616669   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:41.696316   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:41.696352   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:41.765798   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:41.758434    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.758824    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760333    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760643    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.762180    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:41.758434    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.758824    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760333    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760643    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.762180    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:41.765870   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:41.765900   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:41.791490   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:41.791517   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:41.827993   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:41.828022   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:41.854480   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:41.854511   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:41.885603   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:41.885632   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:41.984936   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:41.984970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:41.997672   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:41.997701   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:42.039613   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:42.039668   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:42.100317   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:42.100359   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:44.745690   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:44.756208   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:44.756277   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:44.793586   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:44.793606   59960 cri.go:89] found id: ""
	I1126 20:10:44.793614   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:44.793666   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.797466   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:44.797561   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:44.823288   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:44.823313   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:44.823319   59960 cri.go:89] found id: ""
	I1126 20:10:44.823326   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:44.823383   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.828270   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.832190   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:44.832260   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:44.858643   59960 cri.go:89] found id: ""
	I1126 20:10:44.858694   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.858704   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:44.858711   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:44.858772   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:44.887625   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:44.887711   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:44.887722   59960 cri.go:89] found id: ""
	I1126 20:10:44.887730   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:44.887791   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.891593   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.895076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:44.895151   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:44.924994   59960 cri.go:89] found id: ""
	I1126 20:10:44.925060   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.925085   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:44.925104   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:44.925196   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:44.951783   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:44.951807   59960 cri.go:89] found id: ""
	I1126 20:10:44.951816   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:44.951874   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.955505   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:44.955620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:44.982789   59960 cri.go:89] found id: ""
	I1126 20:10:44.982814   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.982822   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:44.982831   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:44.982843   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:45.010557   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:45.010586   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:45.141549   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:45.141632   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:45.253485   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:45.253554   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:45.353619   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:45.353660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:45.408761   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:45.408795   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:45.443664   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:45.443692   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:45.470742   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:45.470773   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:45.504515   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:45.504544   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:45.608220   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:45.608254   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:45.620732   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:45.620761   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:45.707896   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:45.695026    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.696388    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.697297    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.699791    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.700340    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:45.695026    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.696388    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.697297    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.699791    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.700340    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:48.209609   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:48.220742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:48.220811   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:48.247863   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:48.247886   59960 cri.go:89] found id: ""
	I1126 20:10:48.247894   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:48.247949   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.251929   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:48.251997   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:48.280449   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:48.280470   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:48.280475   59960 cri.go:89] found id: ""
	I1126 20:10:48.280483   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:48.280537   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.284732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.288315   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:48.288405   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:48.316409   59960 cri.go:89] found id: ""
	I1126 20:10:48.316432   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.316440   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:48.316446   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:48.316506   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:48.349208   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:48.349271   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:48.349289   59960 cri.go:89] found id: ""
	I1126 20:10:48.349316   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:48.349408   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.354353   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.357751   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:48.357848   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:48.385059   59960 cri.go:89] found id: ""
	I1126 20:10:48.385081   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.385090   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:48.385107   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:48.385185   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:48.411304   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:48.411326   59960 cri.go:89] found id: ""
	I1126 20:10:48.411334   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:48.411405   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.415053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:48.415156   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:48.441024   59960 cri.go:89] found id: ""
	I1126 20:10:48.441046   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.441055   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:48.441063   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:48.441075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:48.469644   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:48.469672   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:48.510776   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:48.510859   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:48.592885   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:48.592917   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:48.620191   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:48.620216   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:48.715671   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:48.715746   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:48.730976   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:48.731004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:48.784446   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:48.784483   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:48.816189   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:48.816220   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:48.894569   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:48.894607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:48.934181   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:48.934214   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:49.000322   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:48.992247    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.992990    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994167    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994648    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.996101    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:10:51.500568   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:51.512500   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:51.512570   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:51.550166   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:51.550188   59960 cri.go:89] found id: ""
	I1126 20:10:51.550196   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:51.550253   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.554115   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:51.554221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:51.580857   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:51.580880   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:51.580885   59960 cri.go:89] found id: ""
	I1126 20:10:51.580893   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:51.580949   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.584903   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.588661   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:51.588730   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:51.620121   59960 cri.go:89] found id: ""
	I1126 20:10:51.620147   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.620156   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:51.620163   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:51.620225   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:51.648043   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:51.648066   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:51.648071   59960 cri.go:89] found id: ""
	I1126 20:10:51.648079   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:51.648144   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.652146   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.656590   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:51.656658   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:51.684798   59960 cri.go:89] found id: ""
	I1126 20:10:51.684825   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.684835   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:51.684842   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:51.684900   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:51.712247   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:51.712270   59960 cri.go:89] found id: ""
	I1126 20:10:51.712279   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:51.712334   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.716105   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:51.716235   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:51.755296   59960 cri.go:89] found id: ""
	I1126 20:10:51.755373   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.755389   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:51.755400   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:51.755412   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:51.782840   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:51.782871   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:51.826403   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:51.826436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:51.894112   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:51.894148   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:51.920185   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:51.920212   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:51.993815   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:51.993856   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:52.030774   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:52.030804   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:52.112821   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:52.103396    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.104540    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.105295    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.106939    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.107489    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:10:52.112847   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:52.112861   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:52.161738   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:52.161771   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:52.193340   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:52.193368   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:52.291814   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:52.291862   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:54.810104   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:54.820898   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:54.820971   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:54.849431   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:54.849454   59960 cri.go:89] found id: ""
	I1126 20:10:54.849462   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:54.849524   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.853394   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:54.853465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:54.879833   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:54.879855   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:54.879860   59960 cri.go:89] found id: ""
	I1126 20:10:54.879867   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:54.879926   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.883636   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.887200   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:54.887280   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:54.913349   59960 cri.go:89] found id: ""
	I1126 20:10:54.913374   59960 logs.go:282] 0 containers: []
	W1126 20:10:54.913382   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:54.913389   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:54.913446   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:54.941189   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:54.941215   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:54.941221   59960 cri.go:89] found id: ""
	I1126 20:10:54.941229   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:54.941285   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.945133   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.948594   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:54.948673   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:54.977649   59960 cri.go:89] found id: ""
	I1126 20:10:54.977677   59960 logs.go:282] 0 containers: []
	W1126 20:10:54.977687   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:54.977693   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:54.977768   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:55.008912   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:55.008938   59960 cri.go:89] found id: ""
	I1126 20:10:55.008948   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:55.009005   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:55.012659   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:55.012727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:55.056313   59960 cri.go:89] found id: ""
	I1126 20:10:55.056393   59960 logs.go:282] 0 containers: []
	W1126 20:10:55.056419   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:55.056449   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:55.056478   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:55.170137   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:55.170180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:55.194458   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:55.194489   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:55.279906   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:55.272019    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.272480    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274150    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274543    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.276078    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:10:55.279931   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:55.279945   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:55.321902   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:55.321949   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:55.351446   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:55.351474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:55.426688   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:55.426723   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:55.463472   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:55.463501   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:55.510565   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:55.510598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:55.580501   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:55.580534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:55.614574   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:55.614602   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:58.162969   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:58.173910   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:58.174019   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:58.202329   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:58.202352   59960 cri.go:89] found id: ""
	I1126 20:10:58.202360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:58.202415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.206274   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:58.206347   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:58.233721   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:58.233741   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:58.233745   59960 cri.go:89] found id: ""
	I1126 20:10:58.233753   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:58.233811   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.237802   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.242346   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:58.242419   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:58.271013   59960 cri.go:89] found id: ""
	I1126 20:10:58.271038   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.271047   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:58.271053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:58.271109   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:58.298515   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:58.298538   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:58.298553   59960 cri.go:89] found id: ""
	I1126 20:10:58.298560   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:58.298617   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.302497   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.306172   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:58.306241   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:58.331672   59960 cri.go:89] found id: ""
	I1126 20:10:58.331698   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.331707   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:58.331714   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:58.331819   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:58.359197   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:58.359219   59960 cri.go:89] found id: ""
	I1126 20:10:58.359228   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:58.359307   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.363274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:58.363346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:58.403777   59960 cri.go:89] found id: ""
	I1126 20:10:58.403804   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.403814   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:58.403829   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:58.403890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:58.504667   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:58.504702   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:58.517722   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:58.517750   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:58.589740   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:58.581328    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.582205    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.583896    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.584218    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.585780    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:10:58.589761   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:58.589774   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:58.617621   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:58.617648   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:58.660238   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:58.660281   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:58.709585   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:58.709624   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:58.783550   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:58.783586   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:58.820181   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:58.820219   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:58.848533   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:58.848564   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:58.921350   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:58.921390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:01.453687   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:01.467262   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:01.467365   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:01.498662   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:01.498715   59960 cri.go:89] found id: ""
	I1126 20:11:01.498724   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:01.498785   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.504322   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:01.504445   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:01.545072   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:01.545098   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:01.545105   59960 cri.go:89] found id: ""
	I1126 20:11:01.545113   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:01.545185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.548993   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.552685   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:01.552797   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:01.582855   59960 cri.go:89] found id: ""
	I1126 20:11:01.582881   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.582891   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:01.582897   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:01.582954   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:01.613527   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:01.613548   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:01.613553   59960 cri.go:89] found id: ""
	I1126 20:11:01.613560   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:01.613629   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.618859   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.623550   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:01.623624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:01.660116   59960 cri.go:89] found id: ""
	I1126 20:11:01.660140   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.660149   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:01.660159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:01.660221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:01.692418   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:01.692442   59960 cri.go:89] found id: ""
	I1126 20:11:01.692450   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:01.692509   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.696379   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:01.696453   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:01.729407   59960 cri.go:89] found id: ""
	I1126 20:11:01.729430   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.729439   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:01.729447   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:01.729463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:01.784458   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:01.784492   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:01.872850   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:01.872886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:01.903039   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:01.903068   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:01.942057   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:01.942084   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:02.024475   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:02.024514   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:02.128096   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:02.128133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:02.199528   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:02.191565    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.192150    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.193873    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.194411    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.195999    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:02.191565    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.192150    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.193873    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.194411    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.195999    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:02.199554   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:02.199568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:02.226949   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:02.226985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:02.270517   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:02.270555   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:02.306879   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:02.306948   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:04.822921   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:04.834951   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:04.835018   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:04.862163   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:04.862219   59960 cri.go:89] found id: ""
	I1126 20:11:04.862244   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:04.862312   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.865957   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:04.866029   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:04.895638   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:04.895658   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:04.895663   59960 cri.go:89] found id: ""
	I1126 20:11:04.895669   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:04.895722   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.899645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.903838   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:04.903909   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:04.929326   59960 cri.go:89] found id: ""
	I1126 20:11:04.929389   59960 logs.go:282] 0 containers: []
	W1126 20:11:04.929422   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:04.929442   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:04.929522   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:04.956401   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:04.956472   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:04.956491   59960 cri.go:89] found id: ""
	I1126 20:11:04.956522   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:04.956593   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.960195   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.963812   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:04.963930   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:04.990366   59960 cri.go:89] found id: ""
	I1126 20:11:04.990387   59960 logs.go:282] 0 containers: []
	W1126 20:11:04.990395   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:04.990402   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:04.990468   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:05.019718   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:05.019752   59960 cri.go:89] found id: ""
	I1126 20:11:05.019762   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:05.019824   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:05.023681   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:05.023779   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:05.053886   59960 cri.go:89] found id: ""
	I1126 20:11:05.053915   59960 logs.go:282] 0 containers: []
	W1126 20:11:05.053953   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:05.053963   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:05.053994   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:05.152926   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:05.152963   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:05.165506   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:05.165534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:05.194915   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:05.194945   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:05.235104   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:05.235137   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:05.285215   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:05.285247   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:05.314134   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:05.314162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:05.341007   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:05.341034   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:05.418277   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:05.418313   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:05.491273   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:05.482790    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.483758    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.485510    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.486097    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.487714    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:05.482790    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.483758    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.485510    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.486097    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.487714    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:05.491294   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:05.491308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:05.552151   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:05.552187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:08.086064   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:08.097504   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:08.097574   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:08.126757   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:08.126780   59960 cri.go:89] found id: ""
	I1126 20:11:08.126789   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:08.126851   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.131043   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:08.131119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:08.158212   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:08.158274   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:08.158289   59960 cri.go:89] found id: ""
	I1126 20:11:08.158297   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:08.158360   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.162104   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.166980   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:08.167053   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:08.193258   59960 cri.go:89] found id: ""
	I1126 20:11:08.193290   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.193300   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:08.193307   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:08.193374   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:08.219187   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:08.219210   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:08.219216   59960 cri.go:89] found id: ""
	I1126 20:11:08.219234   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:08.219313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.223489   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.227150   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:08.227228   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:08.255318   59960 cri.go:89] found id: ""
	I1126 20:11:08.255340   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.255348   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:08.255355   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:08.255411   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:08.282171   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:08.282194   59960 cri.go:89] found id: ""
	I1126 20:11:08.282202   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:08.282273   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.285788   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:08.285852   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:08.315430   59960 cri.go:89] found id: ""
	I1126 20:11:08.315505   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.315538   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:08.315560   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:08.315580   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:08.345199   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:08.345268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:08.441184   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:08.441220   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:08.511176   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:08.500509    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.501151    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504004    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504546    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.506870    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:08.500509    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.501151    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504004    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504546    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.506870    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:08.511208   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:08.511222   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:08.543421   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:08.543450   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:08.604175   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:08.604207   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:08.632557   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:08.632623   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:08.663480   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:08.663506   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:08.675096   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:08.675127   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:08.713968   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:08.713998   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:08.759141   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:08.759176   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:11.351574   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:11.361875   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:11.361972   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:11.388446   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:11.388515   59960 cri.go:89] found id: ""
	I1126 20:11:11.388529   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:11.388594   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.392093   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:11.392176   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:11.421855   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:11.421875   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:11.421880   59960 cri.go:89] found id: ""
	I1126 20:11:11.421887   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:11.421974   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.425675   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.429670   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:11.429770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:11.455248   59960 cri.go:89] found id: ""
	I1126 20:11:11.455272   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.455280   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:11.455287   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:11.455349   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:11.481734   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:11.481755   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:11.481761   59960 cri.go:89] found id: ""
	I1126 20:11:11.481769   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:11.481841   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.485836   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.489303   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:11.489380   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:11.521985   59960 cri.go:89] found id: ""
	I1126 20:11:11.522011   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.522020   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:11.522036   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:11.522095   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:11.561668   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:11.561700   59960 cri.go:89] found id: ""
	I1126 20:11:11.561708   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:11.561772   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.565986   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:11.566063   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:11.594364   59960 cri.go:89] found id: ""
	I1126 20:11:11.594386   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.594395   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:11.594404   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:11.594440   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:11.639020   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:11.639057   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:11.709026   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:11.709063   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:11.739742   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:11.739771   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:11.806014   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:11.797164    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798194    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798970    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.800645    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.801154    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:11.797164    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798194    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798970    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.800645    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.801154    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:11.806036   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:11.806048   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:11.844958   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:11.844991   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:11.876607   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:11.876634   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:11.911651   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:11.911677   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:11.991136   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:11.991170   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:12.094606   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:12.094650   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:12.107579   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:12.107609   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:14.637133   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:14.648286   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:14.648355   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:14.678404   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:14.678427   59960 cri.go:89] found id: ""
	I1126 20:11:14.678435   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:14.678495   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.682257   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:14.682330   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:14.713744   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:14.713765   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:14.713770   59960 cri.go:89] found id: ""
	I1126 20:11:14.713777   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:14.713835   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.718000   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.721792   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:14.721916   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:14.753701   59960 cri.go:89] found id: ""
	I1126 20:11:14.753767   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.753793   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:14.753812   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:14.753951   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:14.782584   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:14.782609   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:14.782615   59960 cri.go:89] found id: ""
	I1126 20:11:14.782622   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:14.782679   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.786288   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.790091   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:14.790165   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:14.816545   59960 cri.go:89] found id: ""
	I1126 20:11:14.816570   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.816579   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:14.816586   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:14.816642   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:14.846080   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:14.846100   59960 cri.go:89] found id: ""
	I1126 20:11:14.846108   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:14.846166   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.849789   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:14.849880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:14.876460   59960 cri.go:89] found id: ""
	I1126 20:11:14.876491   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.876500   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:14.876508   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:14.876518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:14.951236   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:14.951274   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:14.983322   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:14.983350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:15.061107   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:15.051102    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.052170    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.053243    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.054378    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.056334    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:15.051102    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.052170    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.053243    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.054378    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.056334    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:15.061129   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:15.061144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:15.097557   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:15.097587   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:15.138293   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:15.138326   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:15.168503   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:15.168532   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:15.267115   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:15.267150   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:15.279584   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:15.279615   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:15.326150   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:15.326184   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:15.389193   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:15.389226   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:17.918406   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:17.929053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:17.929122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:17.953884   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:17.953945   59960 cri.go:89] found id: ""
	I1126 20:11:17.953954   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:17.954015   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.957395   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:17.957465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:17.983711   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:17.983731   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:17.983735   59960 cri.go:89] found id: ""
	I1126 20:11:17.983742   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:17.983795   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.987660   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.991154   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:17.991224   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:18.019969   59960 cri.go:89] found id: ""
	I1126 20:11:18.019998   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.020008   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:18.020015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:18.020073   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:18.061149   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:18.061172   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:18.061178   59960 cri.go:89] found id: ""
	I1126 20:11:18.061186   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:18.061246   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.065578   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.069815   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:18.069885   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:18.096457   59960 cri.go:89] found id: ""
	I1126 20:11:18.096479   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.096487   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:18.096494   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:18.096554   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:18.124303   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:18.124367   59960 cri.go:89] found id: ""
	I1126 20:11:18.124392   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:18.124471   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.130707   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:18.130839   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:18.156714   59960 cri.go:89] found id: ""
	I1126 20:11:18.156740   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.156750   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:18.156759   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:18.156773   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:18.233800   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:18.233837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:18.264943   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:18.264973   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:18.343435   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:18.335872    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.336444    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.337906    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.338530    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.339816    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:18.335872    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.336444    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.337906    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.338530    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.339816    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:18.343458   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:18.343470   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:18.372998   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:18.373026   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:18.416461   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:18.416495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:18.445233   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:18.445263   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:18.545748   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:18.545787   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:18.557806   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:18.557835   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:18.622509   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:18.622542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:18.707610   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:18.707689   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:21.236452   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:21.247662   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:21.247729   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:21.276004   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:21.276030   59960 cri.go:89] found id: ""
	I1126 20:11:21.276038   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:21.276125   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.279851   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:21.279945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:21.309267   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:21.309291   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:21.309297   59960 cri.go:89] found id: ""
	I1126 20:11:21.309304   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:21.309359   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.313384   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.317026   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:21.317099   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:21.347773   59960 cri.go:89] found id: ""
	I1126 20:11:21.347799   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.347807   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:21.347817   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:21.347901   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:21.389878   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:21.389898   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:21.389902   59960 cri.go:89] found id: ""
	I1126 20:11:21.389910   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:21.390028   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.396218   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.405704   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:21.405823   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:21.458505   59960 cri.go:89] found id: ""
	I1126 20:11:21.458573   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.458605   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:21.458635   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:21.458731   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:21.486896   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:21.486961   59960 cri.go:89] found id: ""
	I1126 20:11:21.486983   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:21.487052   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.490729   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:21.490845   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:21.521776   59960 cri.go:89] found id: ""
	I1126 20:11:21.521798   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.521806   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:21.521815   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:21.521827   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:21.540126   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:21.540201   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:21.612034   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:21.604355    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.605075    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.606757    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.607410    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.608381    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:21.604355    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.605075    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.606757    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.607410    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.608381    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:21.612058   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:21.612072   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:21.658622   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:21.658657   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:21.707807   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:21.707844   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:21.769271   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:21.769306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:21.801295   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:21.801325   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:21.896605   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:21.896639   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:21.929176   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:21.929205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:21.967857   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:21.967884   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:22.001350   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:22.001375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:24.595423   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:24.606910   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:24.606980   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:24.638795   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:24.638819   59960 cri.go:89] found id: ""
	I1126 20:11:24.638827   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:24.638885   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.642601   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:24.642677   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:24.709965   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:24.709984   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:24.709989   59960 cri.go:89] found id: ""
	I1126 20:11:24.709996   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:24.710075   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.714848   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.719509   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:24.719668   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:24.756426   59960 cri.go:89] found id: ""
	I1126 20:11:24.756497   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.756521   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:24.756540   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:24.756658   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:24.803189   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:24.803256   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:24.803274   59960 cri.go:89] found id: ""
	I1126 20:11:24.803295   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:24.803379   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.808196   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.812071   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:24.812194   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:24.852305   59960 cri.go:89] found id: ""
	I1126 20:11:24.852378   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.852408   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:24.852429   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:24.852520   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:24.889194   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:24.889263   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:24.889294   59960 cri.go:89] found id: ""
	I1126 20:11:24.889320   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:24.889413   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.893347   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.897224   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:24.897334   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:24.930230   59960 cri.go:89] found id: ""
	I1126 20:11:24.930304   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.930333   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:24.930344   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:24.930371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:25.035563   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:25.035604   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:25.054082   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:25.054112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:25.096053   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:25.096081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:25.145970   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:25.146007   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:25.185648   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:25.185678   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:25.214168   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:25.214199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:25.247077   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:25.247106   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:25.338812   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:25.330325    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.331301    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.332972    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.333487    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.335076    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:25.330325    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.331301    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.332972    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.333487    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.335076    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:25.338839   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:25.338854   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:25.379564   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:25.379600   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:25.447694   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:25.447730   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:25.472568   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:25.472598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:28.058550   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:28.076007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:28.076082   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:28.106329   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:28.106351   59960 cri.go:89] found id: ""
	I1126 20:11:28.106360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:28.106418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.110514   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:28.110591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:28.140757   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:28.140777   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:28.140782   59960 cri.go:89] found id: ""
	I1126 20:11:28.140789   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:28.140842   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.144844   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.148401   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:28.148473   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:28.174921   59960 cri.go:89] found id: ""
	I1126 20:11:28.174944   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.174953   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:28.174959   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:28.175022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:28.202405   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:28.202425   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:28.202429   59960 cri.go:89] found id: ""
	I1126 20:11:28.202436   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:28.202491   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.207455   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.211480   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:28.211548   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:28.239676   59960 cri.go:89] found id: ""
	I1126 20:11:28.239749   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.239773   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:28.239793   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:28.239857   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:28.269256   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:28.269277   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:28.269282   59960 cri.go:89] found id: ""
	I1126 20:11:28.269289   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:28.269344   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.273004   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.276329   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:28.276398   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:28.302206   59960 cri.go:89] found id: ""
	I1126 20:11:28.302272   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.302298   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:28.302321   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:28.302363   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:28.332034   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:28.332062   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:28.376567   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:28.376603   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:28.441530   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:28.441568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:28.468188   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:28.468219   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:28.544745   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:28.544780   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:28.590841   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:28.590870   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:28.603163   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:28.603194   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:28.675368   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:28.666467    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.667143    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.668892    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.669848    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.671529    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:28.666467    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.667143    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.668892    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.669848    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.671529    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:28.675390   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:28.675403   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:28.716129   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:28.716160   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:28.746889   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:28.746916   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:28.784649   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:28.784678   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:31.386032   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:31.396663   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:31.396729   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:31.424252   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:31.424274   59960 cri.go:89] found id: ""
	I1126 20:11:31.424282   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:31.424337   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.427909   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:31.427983   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:31.459053   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:31.459075   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:31.459080   59960 cri.go:89] found id: ""
	I1126 20:11:31.459088   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:31.459148   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.462802   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.466564   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:31.466687   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:31.497981   59960 cri.go:89] found id: ""
	I1126 20:11:31.498003   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.498012   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:31.498018   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:31.498110   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:31.526027   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:31.526052   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:31.526057   59960 cri.go:89] found id: ""
	I1126 20:11:31.526065   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:31.526170   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.529987   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.534855   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:31.534945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:31.563109   59960 cri.go:89] found id: ""
	I1126 20:11:31.563169   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.563198   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:31.563219   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:31.563293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:31.589243   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:31.589265   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:31.589270   59960 cri.go:89] found id: ""
	I1126 20:11:31.589278   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:31.589354   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.593459   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.596946   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:31.597021   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:31.623525   59960 cri.go:89] found id: ""
	I1126 20:11:31.623558   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.623567   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:31.623576   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:31.623587   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:31.652294   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:31.652373   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:31.735258   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:31.735294   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:31.768608   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:31.768683   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:31.870428   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:31.870508   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:31.897014   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:31.897042   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:32.001263   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:32.001299   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:32.038474   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:32.038514   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:32.052890   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:32.052925   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:32.157895   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:32.150135    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.150798    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152292    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152811    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.154388    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:32.150135    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.150798    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152292    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152811    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.154388    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:32.157991   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:32.158015   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:32.202276   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:32.202312   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:32.246886   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:32.246920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:34.774920   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:34.785509   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:34.785619   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:34.817587   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:34.817656   59960 cri.go:89] found id: ""
	I1126 20:11:34.817682   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:34.817753   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.821524   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:34.821594   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:34.849130   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:34.849154   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:34.849159   59960 cri.go:89] found id: ""
	I1126 20:11:34.849167   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:34.849233   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.852945   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.856601   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:34.856684   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:34.883375   59960 cri.go:89] found id: ""
	I1126 20:11:34.883398   59960 logs.go:282] 0 containers: []
	W1126 20:11:34.883412   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:34.883450   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:34.883524   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:34.909798   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:34.909821   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:34.909826   59960 cri.go:89] found id: ""
	I1126 20:11:34.909834   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:34.909888   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.913552   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.916964   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:34.917033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:34.949567   59960 cri.go:89] found id: ""
	I1126 20:11:34.949592   59960 logs.go:282] 0 containers: []
	W1126 20:11:34.949601   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:34.949608   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:34.949663   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:34.977128   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:34.977150   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:34.977156   59960 cri.go:89] found id: ""
	I1126 20:11:34.977163   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:34.977220   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.981001   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.984842   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:34.984957   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:35.012427   59960 cri.go:89] found id: ""
	I1126 20:11:35.012460   59960 logs.go:282] 0 containers: []
	W1126 20:11:35.012470   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:35.012479   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:35.012493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:35.040355   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:35.040396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:35.085028   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:35.085064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:35.113614   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:35.113649   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:35.153880   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:35.153911   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:35.198643   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:35.198675   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:35.268315   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:35.268350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:35.295776   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:35.295804   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:35.376804   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:35.376847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:35.482429   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:35.482467   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:35.495585   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:35.495620   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:35.570301   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:35.562818    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.563633    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565195    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565472    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.566934    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:35.562818    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.563633    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565195    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565472    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.566934    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:35.570323   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:35.570336   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.104089   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:38.117181   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:38.117256   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:38.149986   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:38.150007   59960 cri.go:89] found id: ""
	I1126 20:11:38.150015   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:38.150071   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.153769   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:38.153836   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:38.181424   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:38.181445   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:38.181450   59960 cri.go:89] found id: ""
	I1126 20:11:38.181457   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:38.181514   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.186065   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.189965   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:38.190088   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:38.222377   59960 cri.go:89] found id: ""
	I1126 20:11:38.222403   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.222412   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:38.222418   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:38.222512   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:38.251289   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:38.251308   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:38.251312   59960 cri.go:89] found id: ""
	I1126 20:11:38.251319   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:38.251376   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.256455   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.260117   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:38.260191   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:38.285970   59960 cri.go:89] found id: ""
	I1126 20:11:38.285993   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.286001   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:38.286007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:38.286071   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:38.316333   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.316352   59960 cri.go:89] found id: ""
	I1126 20:11:38.316360   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:38.316418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.320056   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:38.320141   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:38.346321   59960 cri.go:89] found id: ""
	I1126 20:11:38.346343   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.346355   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:38.346365   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:38.346377   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:38.373397   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:38.373424   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:38.425362   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:38.425395   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.453015   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:38.453091   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:38.532623   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:38.532697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:38.633361   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:38.633397   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:38.645846   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:38.645873   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:38.703411   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:38.703444   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:38.767512   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:38.767547   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:38.796976   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:38.797004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:38.829009   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:38.829038   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:38.898466   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:38.890004    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.890695    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892444    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892921    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.894201    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:38.890004    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.890695    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892444    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892921    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.894201    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:41.398722   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:41.410132   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:41.410201   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:41.438116   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:41.438139   59960 cri.go:89] found id: ""
	I1126 20:11:41.438148   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:41.438205   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.442017   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:41.442090   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:41.469903   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:41.469958   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:41.469963   59960 cri.go:89] found id: ""
	I1126 20:11:41.469970   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:41.470027   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.474067   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.478045   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:41.478121   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:41.505356   59960 cri.go:89] found id: ""
	I1126 20:11:41.505421   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.505446   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:41.505473   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:41.505547   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:41.539013   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:41.539078   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:41.539097   59960 cri.go:89] found id: ""
	I1126 20:11:41.539120   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:41.539192   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.545082   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.548706   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:41.548780   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:41.575834   59960 cri.go:89] found id: ""
	I1126 20:11:41.575859   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.575867   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:41.575874   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:41.575934   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:41.611347   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:41.611373   59960 cri.go:89] found id: ""
	I1126 20:11:41.611381   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:41.611452   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.615789   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:41.615865   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:41.641022   59960 cri.go:89] found id: ""
	I1126 20:11:41.641047   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.641057   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:41.641066   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:41.641078   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:41.742347   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:41.742381   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:41.754134   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:41.754164   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:41.831601   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:41.821574    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.822287    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.823756    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.824699    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.826433    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:41.821574    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.822287    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.823756    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.824699    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.826433    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:41.831624   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:41.831637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:41.860096   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:41.860125   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:41.910250   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:41.910285   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:41.980123   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:41.980161   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:42.010802   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:42.010829   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:42.106028   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:42.106070   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:42.164514   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:42.164559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:42.271103   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:42.271151   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:44.839838   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:44.850546   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:44.850618   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:44.876918   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:44.876988   59960 cri.go:89] found id: ""
	I1126 20:11:44.877011   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:44.877094   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.881043   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:44.881125   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:44.911219   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:44.911239   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:44.911243   59960 cri.go:89] found id: ""
	I1126 20:11:44.911250   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:44.911304   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.914984   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.918517   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:44.918591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:44.948367   59960 cri.go:89] found id: ""
	I1126 20:11:44.948393   59960 logs.go:282] 0 containers: []
	W1126 20:11:44.948403   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:44.948410   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:44.948488   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:44.979725   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:44.979749   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:44.979762   59960 cri.go:89] found id: ""
	I1126 20:11:44.979770   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:44.979825   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.983672   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.987318   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:44.987393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:45.013302   59960 cri.go:89] found id: ""
	I1126 20:11:45.013326   59960 logs.go:282] 0 containers: []
	W1126 20:11:45.013335   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:45.013342   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:45.013400   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:45.055627   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:45.055649   59960 cri.go:89] found id: ""
	I1126 20:11:45.055657   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:45.055726   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:45.085558   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:45.085645   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:45.151023   59960 cri.go:89] found id: ""
	I1126 20:11:45.151097   59960 logs.go:282] 0 containers: []
	W1126 20:11:45.151125   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:45.151149   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:45.151189   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:45.299197   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:45.299495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:45.414522   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:45.414561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:45.426305   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:45.426334   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:45.498361   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:45.490138    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.490855    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.492369    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.493032    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.494581    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:45.490138    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.490855    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.492369    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.493032    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.494581    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:45.498385   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:45.498406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:45.544282   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:45.544315   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:45.572601   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:45.572628   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:45.618675   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:45.618704   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:45.644699   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:45.644729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:45.692766   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:45.692847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:45.768264   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:45.768298   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.298071   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:48.309786   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:48.309955   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:48.338906   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:48.338929   59960 cri.go:89] found id: ""
	I1126 20:11:48.338938   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:48.339013   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.342703   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:48.342807   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:48.373459   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:48.373483   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:48.373489   59960 cri.go:89] found id: ""
	I1126 20:11:48.373497   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:48.373571   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.377243   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.380907   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:48.380978   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:48.410171   59960 cri.go:89] found id: ""
	I1126 20:11:48.410194   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.410203   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:48.410210   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:48.410269   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:48.438118   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:48.438141   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.438146   59960 cri.go:89] found id: ""
	I1126 20:11:48.438153   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:48.438208   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.441706   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.445239   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:48.445331   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:48.471795   59960 cri.go:89] found id: ""
	I1126 20:11:48.471818   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.471827   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:48.471834   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:48.471894   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:48.499373   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:48.499444   59960 cri.go:89] found id: ""
	I1126 20:11:48.499459   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:48.499520   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.503413   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:48.503486   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:48.530399   59960 cri.go:89] found id: ""
	I1126 20:11:48.530421   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.530435   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:48.530450   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:48.530464   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:48.571849   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:48.571882   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:48.658179   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:48.658279   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:48.689018   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:48.689045   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:48.763174   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:48.763207   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:48.778567   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:48.778596   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:48.827328   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:48.827365   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.857288   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:48.857365   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:48.888507   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:48.888539   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:48.988930   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:48.988967   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:49.069225   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:49.055449    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.056233    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.057886    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.058530    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.060083    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:49.055449    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.056233    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.057886    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.058530    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.060083    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:49.069248   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:49.069262   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:51.595258   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:51.606745   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:51.606819   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:51.636395   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:51.636416   59960 cri.go:89] found id: ""
	I1126 20:11:51.636430   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:51.636488   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.640040   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:51.640115   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:51.676792   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:51.676812   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:51.676816   59960 cri.go:89] found id: ""
	I1126 20:11:51.676824   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:51.676877   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.681110   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.685068   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:51.685183   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:51.720013   59960 cri.go:89] found id: ""
	I1126 20:11:51.720038   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.720047   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:51.720054   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:51.720111   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:51.748336   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:51.748360   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:51.748375   59960 cri.go:89] found id: ""
	I1126 20:11:51.748383   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:51.748439   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.752267   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.756170   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:51.756241   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:51.783057   59960 cri.go:89] found id: ""
	I1126 20:11:51.783086   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.783095   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:51.783101   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:51.783163   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:51.811250   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:51.811272   59960 cri.go:89] found id: ""
	I1126 20:11:51.811282   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:51.811338   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.815120   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:51.815232   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:51.846026   59960 cri.go:89] found id: ""
	I1126 20:11:51.846049   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.846064   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:51.846074   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:51.846086   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:51.890348   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:51.890380   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:51.920851   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:51.920922   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:51.977107   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:51.977140   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:52.060932   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:52.060981   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:52.093050   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:52.093078   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:52.176431   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:52.176468   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:52.215980   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:52.216012   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:52.327858   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:52.327901   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:52.340252   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:52.340285   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:52.418993   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:52.410090    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.410776    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.412508    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.413095    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.414685    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:11:52.419016   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:52.419029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:54.944539   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:54.955542   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:54.955615   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:54.986048   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:54.986074   59960 cri.go:89] found id: ""
	I1126 20:11:54.986083   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:54.986139   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:54.989757   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:54.989829   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:55.016053   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:55.016085   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:55.016091   59960 cri.go:89] found id: ""
	I1126 20:11:55.016099   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:55.016174   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.019787   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.023250   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:55.023321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:55.069450   59960 cri.go:89] found id: ""
	I1126 20:11:55.069473   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.069482   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:55.069489   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:55.069572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:55.098641   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:55.098664   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:55.098669   59960 cri.go:89] found id: ""
	I1126 20:11:55.098676   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:55.098732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.102435   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.106227   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:55.106351   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:55.138121   59960 cri.go:89] found id: ""
	I1126 20:11:55.138145   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.138154   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:55.138174   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:55.138236   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:55.167513   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:55.167544   59960 cri.go:89] found id: ""
	I1126 20:11:55.167553   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:55.167618   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.171313   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:55.171381   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:55.202786   59960 cri.go:89] found id: ""
	I1126 20:11:55.202813   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.202822   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:55.202832   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:55.202866   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:55.302444   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:55.302521   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:55.340281   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:55.340307   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:55.380642   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:55.380671   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:55.413529   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:55.413559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:55.441562   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:55.441590   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:55.518521   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:55.518561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:55.558444   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:55.558478   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:55.571280   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:55.571312   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:55.640808   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:55.631279    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.631827    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.633724    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.634294    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.636622    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:11:55.640840   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:55.640855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:55.687489   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:55.687525   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.274871   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:58.285429   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:58.285499   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:58.313375   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:58.313399   59960 cri.go:89] found id: ""
	I1126 20:11:58.313406   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:58.313459   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.316973   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:58.317046   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:58.343195   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:58.343222   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:58.343233   59960 cri.go:89] found id: ""
	I1126 20:11:58.343241   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:58.343299   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.346903   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.350464   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:58.350532   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:58.389630   59960 cri.go:89] found id: ""
	I1126 20:11:58.389651   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.389659   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:58.389666   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:58.389727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:58.417327   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.417347   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:58.417351   59960 cri.go:89] found id: ""
	I1126 20:11:58.417358   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:58.417415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.421999   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.425800   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:58.425864   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:58.452945   59960 cri.go:89] found id: ""
	I1126 20:11:58.452969   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.452977   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:58.452983   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:58.453043   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:58.488167   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:58.488198   59960 cri.go:89] found id: ""
	I1126 20:11:58.488207   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:58.488290   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.492158   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:58.492254   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:58.519792   59960 cri.go:89] found id: ""
	I1126 20:11:58.519815   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.519824   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:58.519833   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:58.519845   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:58.539152   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:58.539178   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:58.611844   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:58.602656    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.604433    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.605264    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.606165    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.607783    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:11:58.611916   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:58.611936   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:58.653684   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:58.653755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:58.701629   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:58.701698   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:58.797678   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:58.797712   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:58.826943   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:58.826971   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:58.870347   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:58.870382   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.935086   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:58.935124   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:58.968825   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:58.968856   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:58.997914   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:58.998030   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:01.577720   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:01.589568   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:01.589642   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:01.621435   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:01.621457   59960 cri.go:89] found id: ""
	I1126 20:12:01.621466   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:01.621521   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.625557   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:01.625630   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:01.653424   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:01.653447   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:01.653452   59960 cri.go:89] found id: ""
	I1126 20:12:01.653459   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:01.653520   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.658113   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.663163   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:01.663279   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:01.690617   59960 cri.go:89] found id: ""
	I1126 20:12:01.690692   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.690707   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:01.690714   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:01.690776   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:01.721669   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:01.721691   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:01.721696   59960 cri.go:89] found id: ""
	I1126 20:12:01.721705   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:01.721760   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.725774   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.729528   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:01.729608   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:01.755428   59960 cri.go:89] found id: ""
	I1126 20:12:01.755452   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.755461   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:01.755468   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:01.755529   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:01.783818   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:01.783841   59960 cri.go:89] found id: ""
	I1126 20:12:01.783849   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:01.783905   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.787656   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:01.787726   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:01.815958   59960 cri.go:89] found id: ""
	I1126 20:12:01.816025   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.816050   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:01.816067   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:01.816080   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:01.867560   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:01.867592   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:01.932205   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:01.932256   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:02.002408   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:02.002441   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:02.051577   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:02.051612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:02.088918   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:02.088948   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:02.168080   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:02.158735    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.159253    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162045    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162706    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.164462    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:02.168105   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:02.168119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:02.244385   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:02.244435   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:02.282263   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:02.282293   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:02.383774   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:02.383810   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:02.399682   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:02.399712   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:04.928429   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:04.939418   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:04.939502   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:04.967318   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:04.967344   59960 cri.go:89] found id: ""
	I1126 20:12:04.967352   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:04.967406   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:04.971172   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:04.971242   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:04.998636   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:04.998660   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:04.998666   59960 cri.go:89] found id: ""
	I1126 20:12:04.998673   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:04.998728   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.002734   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.006234   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:05.006304   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:05.031905   59960 cri.go:89] found id: ""
	I1126 20:12:05.031931   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.031948   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:05.031954   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:05.032022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:05.062024   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:05.062047   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:05.062053   59960 cri.go:89] found id: ""
	I1126 20:12:05.062061   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:05.062119   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.066633   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.070769   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:05.070894   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:05.098088   59960 cri.go:89] found id: ""
	I1126 20:12:05.098113   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.098123   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:05.098130   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:05.098213   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:05.131371   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:05.131394   59960 cri.go:89] found id: ""
	I1126 20:12:05.131403   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:05.131477   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.135270   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:05.135372   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:05.162342   59960 cri.go:89] found id: ""
	I1126 20:12:05.162365   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.162374   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:05.162383   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:05.162395   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:05.235501   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:05.227170    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.227750    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229253    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229720    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.231198    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:05.235522   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:05.235536   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:05.263102   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:05.263128   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:05.302111   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:05.302144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:05.333187   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:05.333216   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:05.359477   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:05.359505   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:05.438760   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:05.438798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:05.451777   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:05.451807   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:05.498508   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:05.498543   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:05.568808   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:05.568843   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:05.616879   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:05.616909   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:08.220414   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:08.231126   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:08.231199   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:08.258035   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:08.258105   59960 cri.go:89] found id: ""
	I1126 20:12:08.258125   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:08.258192   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.262176   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:08.262249   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:08.289710   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:08.289733   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:08.289739   59960 cri.go:89] found id: ""
	I1126 20:12:08.289750   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:08.289805   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.293485   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.297802   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:08.297880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:08.327209   59960 cri.go:89] found id: ""
	I1126 20:12:08.327234   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.327243   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:08.327263   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:08.327336   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:08.357819   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:08.357840   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:08.357845   59960 cri.go:89] found id: ""
	I1126 20:12:08.357852   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:08.357906   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.361705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.365237   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:08.365328   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:08.394319   59960 cri.go:89] found id: ""
	I1126 20:12:08.394383   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.394399   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:08.394406   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:08.394480   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:08.420463   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:08.420527   59960 cri.go:89] found id: ""
	I1126 20:12:08.420553   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:08.420638   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.424335   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:08.424450   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:08.452961   59960 cri.go:89] found id: ""
	I1126 20:12:08.452986   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.452995   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:08.453003   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:08.453014   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:08.493988   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:08.494022   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:08.544465   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:08.544499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:08.574385   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:08.574413   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:08.586334   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:08.586371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:08.667454   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:08.650997    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.659303    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.660307    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662037    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662374    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:08.667486   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:08.667499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:08.699349   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:08.699378   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:08.764949   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:08.764985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:08.796757   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:08.796785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:08.880624   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:08.880660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:08.914640   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:08.914667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:11.513808   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:11.524482   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:11.524580   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:11.558859   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:11.558902   59960 cri.go:89] found id: ""
	I1126 20:12:11.558911   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:11.558970   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.562673   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:11.562747   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:11.588932   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:11.588951   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:11.588956   59960 cri.go:89] found id: ""
	I1126 20:12:11.588963   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:11.589017   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.592810   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.596570   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:11.596643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:11.623065   59960 cri.go:89] found id: ""
	I1126 20:12:11.623145   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.623161   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:11.623169   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:11.623229   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:11.650581   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:11.650605   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:11.650610   59960 cri.go:89] found id: ""
	I1126 20:12:11.650618   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:11.650671   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.655559   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.659747   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:11.659817   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:11.687296   59960 cri.go:89] found id: ""
	I1126 20:12:11.687322   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.687331   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:11.687337   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:11.687396   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:11.720511   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:11.720579   59960 cri.go:89] found id: ""
	I1126 20:12:11.720617   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:11.720708   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.724437   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:11.724506   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:11.749548   59960 cri.go:89] found id: ""
	I1126 20:12:11.749582   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.749591   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:11.749601   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:11.749612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:11.844417   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:11.844451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:11.856841   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:11.856870   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:11.927039   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:11.919031    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.919434    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921013    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921770    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.923409    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:11.927072   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:11.927085   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:11.952749   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:11.952778   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:11.979828   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:11.979854   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:12.054969   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:12.055007   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:12.096829   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:12.096861   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:12.139040   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:12.139073   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:12.188630   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:12.188665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:12.261491   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:12.261525   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:14.793314   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:14.805690   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:14.805792   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:14.834480   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:14.834550   59960 cri.go:89] found id: ""
	I1126 20:12:14.834563   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:14.834624   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.838451   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:14.838546   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:14.865258   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:14.865280   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:14.865288   59960 cri.go:89] found id: ""
	I1126 20:12:14.865296   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:14.865369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.869042   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.872598   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:14.872673   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:14.899453   59960 cri.go:89] found id: ""
	I1126 20:12:14.899475   59960 logs.go:282] 0 containers: []
	W1126 20:12:14.899484   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:14.899491   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:14.899553   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:14.927802   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:14.927830   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:14.927837   59960 cri.go:89] found id: ""
	I1126 20:12:14.927845   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:14.927940   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.932558   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.936133   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:14.936204   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:14.961102   59960 cri.go:89] found id: ""
	I1126 20:12:14.961173   59960 logs.go:282] 0 containers: []
	W1126 20:12:14.961195   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:14.961215   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:14.961302   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:15.002363   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:15.002384   59960 cri.go:89] found id: ""
	I1126 20:12:15.002393   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:15.002447   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:15.006142   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:15.006212   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:15.032134   59960 cri.go:89] found id: ""
	I1126 20:12:15.032199   59960 logs.go:282] 0 containers: []
	W1126 20:12:15.032214   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:15.032224   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:15.032240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:15.081347   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:15.081379   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:15.180623   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:15.180658   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:15.209901   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:15.209962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:15.262607   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:15.262636   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:15.288510   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:15.288544   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:15.367680   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:15.367714   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:15.412204   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:15.412231   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:15.424270   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:15.424300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:15.503073   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:15.494667    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.495283    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.496993    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.497515    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.498972    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:15.494667    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.495283    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.496993    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.497515    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.498972    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:15.503139   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:15.503167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:15.550262   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:15.550296   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.118444   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:18.129864   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:18.129981   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:18.156819   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:18.156838   59960 cri.go:89] found id: ""
	I1126 20:12:18.156846   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:18.156904   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.161071   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:18.161149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:18.189616   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:18.189639   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:18.189644   59960 cri.go:89] found id: ""
	I1126 20:12:18.189651   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:18.189705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.193599   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.197622   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:18.197702   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:18.229000   59960 cri.go:89] found id: ""
	I1126 20:12:18.229024   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.229034   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:18.229041   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:18.229097   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:18.258704   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.258728   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:18.258734   59960 cri.go:89] found id: ""
	I1126 20:12:18.258741   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:18.258799   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.262617   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.266630   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:18.266703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:18.294498   59960 cri.go:89] found id: ""
	I1126 20:12:18.294520   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.294528   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:18.294535   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:18.294592   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:18.321461   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:18.321534   59960 cri.go:89] found id: ""
	I1126 20:12:18.321556   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:18.321645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.325350   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:18.325460   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:18.351492   59960 cri.go:89] found id: ""
	I1126 20:12:18.351553   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.351579   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:18.351599   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:18.351637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:18.407171   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:18.407205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:18.439080   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:18.439112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:18.547958   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:18.547995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:18.619721   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:18.609846    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.610654    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612119    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612768    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.614366    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:18.609846    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.610654    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612119    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612768    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.614366    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:18.619742   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:18.619754   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:18.645098   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:18.645177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:18.682606   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:18.682639   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.763422   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:18.763453   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:18.795735   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:18.795762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:18.822004   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:18.822035   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:18.896691   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:18.896727   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:21.410083   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:21.420840   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:21.420938   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:21.446994   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:21.447016   59960 cri.go:89] found id: ""
	I1126 20:12:21.447024   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:21.447102   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.450650   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:21.450721   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:21.479530   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:21.479554   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:21.479559   59960 cri.go:89] found id: ""
	I1126 20:12:21.479566   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:21.479639   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.483856   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.487301   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:21.487396   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:21.514632   59960 cri.go:89] found id: ""
	I1126 20:12:21.514655   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.514664   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:21.514677   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:21.514734   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:21.552676   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:21.552697   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:21.552701   59960 cri.go:89] found id: ""
	I1126 20:12:21.552708   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:21.552764   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.558562   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.562503   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:21.562570   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:21.592027   59960 cri.go:89] found id: ""
	I1126 20:12:21.592051   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.592059   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:21.592065   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:21.592122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:21.622050   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:21.622069   59960 cri.go:89] found id: ""
	I1126 20:12:21.622078   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:21.622133   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.625979   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:21.626057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:21.659506   59960 cri.go:89] found id: ""
	I1126 20:12:21.659530   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.659539   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:21.659548   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:21.659561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:21.692379   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:21.692406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:21.765021   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:21.765055   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:21.839116   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:21.830975    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.831759    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833349    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833904    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.835476    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:21.830975    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.831759    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833349    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833904    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.835476    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:21.839140   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:21.839153   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:21.865386   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:21.865413   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:21.904223   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:21.904257   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:21.949513   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:21.949545   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:21.975811   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:21.975838   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:22.009804   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:22.009830   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:22.114067   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:22.114107   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:22.129823   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:22.129850   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:24.699777   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:24.710717   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:24.710835   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:24.737361   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:24.737395   59960 cri.go:89] found id: ""
	I1126 20:12:24.737404   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:24.737467   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.741100   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:24.741181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:24.766942   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:24.767005   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:24.767023   59960 cri.go:89] found id: ""
	I1126 20:12:24.767038   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:24.767117   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.771423   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.775599   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:24.775679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:24.807211   59960 cri.go:89] found id: ""
	I1126 20:12:24.807238   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.807247   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:24.807254   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:24.807313   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:24.839448   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:24.839474   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:24.839480   59960 cri.go:89] found id: ""
	I1126 20:12:24.839487   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:24.839543   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.843345   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.846785   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:24.846859   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:24.875974   59960 cri.go:89] found id: ""
	I1126 20:12:24.875999   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.876008   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:24.876015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:24.876074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:24.904623   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:24.904646   59960 cri.go:89] found id: ""
	I1126 20:12:24.904655   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:24.904729   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.908536   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:24.908626   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:24.937367   59960 cri.go:89] found id: ""
	I1126 20:12:24.937448   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.937471   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:24.937494   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:24.937534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:24.976827   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:24.976864   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:25.024594   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:25.024629   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:25.103663   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:25.103701   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:25.184899   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:25.184934   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:25.288663   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:25.288696   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:25.303312   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:25.303340   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:25.371319   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:25.361818    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.362509    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.364256    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.365013    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.366870    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:25.361818    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.362509    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.364256    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.365013    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.366870    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:25.371342   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:25.371357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:25.399886   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:25.399954   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:25.431130   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:25.431162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:25.457679   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:25.457758   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:27.990400   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:28.001290   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:28.001359   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:28.027402   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:28.027424   59960 cri.go:89] found id: ""
	I1126 20:12:28.027441   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:28.027501   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.030992   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:28.031083   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:28.072993   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:28.073014   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:28.073019   59960 cri.go:89] found id: ""
	I1126 20:12:28.073026   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:28.073084   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.076846   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.080628   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:28.080762   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:28.107876   59960 cri.go:89] found id: ""
	I1126 20:12:28.107902   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.107911   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:28.107918   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:28.107993   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:28.135277   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:28.135299   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:28.135305   59960 cri.go:89] found id: ""
	I1126 20:12:28.135312   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:28.135369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.139340   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.143115   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:28.143193   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:28.179129   59960 cri.go:89] found id: ""
	I1126 20:12:28.179230   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.179259   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:28.179273   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:28.179346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:28.208432   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:28.208453   59960 cri.go:89] found id: ""
	I1126 20:12:28.208465   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:28.208523   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.212104   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:28.212174   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:28.239214   59960 cri.go:89] found id: ""
	I1126 20:12:28.239290   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.239307   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:28.239317   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:28.239331   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:28.311306   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:28.311342   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:28.340943   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:28.340972   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:28.376088   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:28.376113   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:28.447578   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:28.440425    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.440837    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442342    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442644    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.444078    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:28.440425    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.440837    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442342    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442644    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.444078    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:28.447601   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:28.447613   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:28.494672   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:28.494707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:28.524817   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:28.524847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:28.611534   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:28.611568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:28.717586   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:28.717621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:28.729869   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:28.729894   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:28.755777   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:28.755805   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.304943   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:31.316121   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:31.316189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:31.344914   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:31.344936   59960 cri.go:89] found id: ""
	I1126 20:12:31.344945   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:31.345000   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.348636   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:31.348708   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:31.376592   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.376614   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:31.376623   59960 cri.go:89] found id: ""
	I1126 20:12:31.376630   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:31.376683   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.380757   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.384468   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:31.384545   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:31.415544   59960 cri.go:89] found id: ""
	I1126 20:12:31.415570   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.415579   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:31.415586   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:31.415646   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:31.441604   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:31.441680   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:31.441699   59960 cri.go:89] found id: ""
	I1126 20:12:31.441723   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:31.441808   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.445590   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.449159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:31.449233   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:31.475467   59960 cri.go:89] found id: ""
	I1126 20:12:31.475492   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.475501   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:31.475507   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:31.475567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:31.505974   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:31.505995   59960 cri.go:89] found id: ""
	I1126 20:12:31.506004   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:31.506068   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.510913   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:31.510988   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:31.555870   59960 cri.go:89] found id: ""
	I1126 20:12:31.555901   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.555911   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:31.555920   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:31.555932   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:31.569317   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:31.569396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:31.639071   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:31.630335    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.631132    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.632992    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.633425    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.635012    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:31.630335    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.631132    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.632992    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.633425    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.635012    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:31.639141   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:31.639171   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:31.685122   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:31.685156   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:31.715735   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:31.715763   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:31.744469   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:31.744499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.782788   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:31.782822   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:31.854784   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:31.854820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:31.883960   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:31.883989   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:31.968197   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:31.968235   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:32.000618   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:32.000646   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:34.599812   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:34.610580   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:34.610690   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:34.643812   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:34.643835   59960 cri.go:89] found id: ""
	I1126 20:12:34.643844   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:34.643902   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.647819   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:34.647891   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:34.681825   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:34.681849   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:34.681855   59960 cri.go:89] found id: ""
	I1126 20:12:34.681863   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:34.681959   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.685589   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.689208   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:34.689280   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:34.719704   59960 cri.go:89] found id: ""
	I1126 20:12:34.719727   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.719736   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:34.719743   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:34.719802   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:34.745609   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:34.745632   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:34.745639   59960 cri.go:89] found id: ""
	I1126 20:12:34.745646   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:34.745704   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.749369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.752915   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:34.752982   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:34.778956   59960 cri.go:89] found id: ""
	I1126 20:12:34.778982   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.778996   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:34.779003   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:34.779059   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:34.805123   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:34.805146   59960 cri.go:89] found id: ""
	I1126 20:12:34.805153   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:34.805211   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.808760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:34.808834   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:34.834427   59960 cri.go:89] found id: ""
	I1126 20:12:34.834452   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.834462   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:34.834471   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:34.834482   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:34.912760   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:34.912792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:35.015751   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:35.015790   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:35.046216   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:35.046291   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:35.092725   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:35.092760   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:35.163096   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:35.163130   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:35.191405   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:35.191488   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:35.227181   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:35.227213   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:35.240889   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:35.240922   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:35.311849   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:35.302602    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.303934    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.304899    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.306705    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.307280    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:35.302602    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.303934    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.304899    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.306705    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.307280    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:35.311871   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:35.311884   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:35.356916   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:35.356951   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:37.883250   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:37.894052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:37.894122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:37.924918   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:37.924943   59960 cri.go:89] found id: ""
	I1126 20:12:37.924956   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:37.925020   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.928865   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:37.928940   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:37.961907   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:37.961958   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:37.961964   59960 cri.go:89] found id: ""
	I1126 20:12:37.961971   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:37.962035   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.965843   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.969339   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:37.969409   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:37.995343   59960 cri.go:89] found id: ""
	I1126 20:12:37.995373   59960 logs.go:282] 0 containers: []
	W1126 20:12:37.995381   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:37.995388   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:37.995491   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:38.022312   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:38.022334   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:38.022339   59960 cri.go:89] found id: ""
	I1126 20:12:38.022346   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:38.022413   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.026080   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.029533   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:38.029622   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:38.060280   59960 cri.go:89] found id: ""
	I1126 20:12:38.060307   59960 logs.go:282] 0 containers: []
	W1126 20:12:38.060346   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:38.060368   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:38.060437   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:38.091248   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:38.091312   59960 cri.go:89] found id: ""
	I1126 20:12:38.091327   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:38.091425   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.095836   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:38.095914   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:38.125378   59960 cri.go:89] found id: ""
	I1126 20:12:38.125403   59960 logs.go:282] 0 containers: []
	W1126 20:12:38.125413   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:38.125422   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:38.125436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:38.151847   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:38.151875   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:38.202356   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:38.202391   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:38.247650   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:38.247725   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:38.275709   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:38.275736   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:38.307514   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:38.307542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:38.404957   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:38.404994   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:38.491924   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:38.491962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:38.521423   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:38.521460   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:38.598021   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:38.598053   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:38.610973   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:38.611004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:38.687841   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:38.679705   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.680686   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.681793   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.682498   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.684162   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:38.679705   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.680686   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.681793   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.682498   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.684162   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:41.188401   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:41.199011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:41.199080   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:41.227170   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:41.227196   59960 cri.go:89] found id: ""
	I1126 20:12:41.227205   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:41.227260   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.230873   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:41.230945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:41.257484   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:41.257506   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:41.257522   59960 cri.go:89] found id: ""
	I1126 20:12:41.257529   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:41.257584   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.261286   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.265036   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:41.265101   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:41.290579   59960 cri.go:89] found id: ""
	I1126 20:12:41.290645   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.290669   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:41.290682   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:41.290741   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:41.319766   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:41.319786   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:41.319791   59960 cri.go:89] found id: ""
	I1126 20:12:41.319799   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:41.319859   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.323637   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.327077   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:41.327177   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:41.356676   59960 cri.go:89] found id: ""
	I1126 20:12:41.356702   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.356711   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:41.356719   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:41.356783   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:41.385771   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:41.385790   59960 cri.go:89] found id: ""
	I1126 20:12:41.385798   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:41.385852   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.389446   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:41.389544   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:41.416642   59960 cri.go:89] found id: ""
	I1126 20:12:41.416710   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.416732   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:41.416754   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:41.416788   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:41.482246   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:41.473419   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.474136   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.475824   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.476403   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.478152   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:41.473419   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.474136   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.475824   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.476403   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.478152   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:41.482311   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:41.482339   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:41.509950   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:41.510016   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:41.557291   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:41.557324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:41.584211   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:41.584240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:41.666177   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:41.666212   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:41.767334   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:41.767369   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:41.781064   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:41.781089   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:41.825285   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:41.825321   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:41.892538   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:41.892573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:41.920754   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:41.920785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:44.468280   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:44.479465   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:44.479546   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:44.507592   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:44.507615   59960 cri.go:89] found id: ""
	I1126 20:12:44.507623   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:44.507679   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.511422   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:44.511510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:44.543146   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:44.543169   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:44.543174   59960 cri.go:89] found id: ""
	I1126 20:12:44.543181   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:44.543251   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.547022   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.550639   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:44.550719   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:44.579025   59960 cri.go:89] found id: ""
	I1126 20:12:44.579054   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.579063   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:44.579070   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:44.579139   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:44.611309   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:44.611332   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:44.611336   59960 cri.go:89] found id: ""
	I1126 20:12:44.611344   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:44.611407   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.615332   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.619108   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:44.619183   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:44.645161   59960 cri.go:89] found id: ""
	I1126 20:12:44.645185   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.645194   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:44.645201   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:44.645257   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:44.684280   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:44.684301   59960 cri.go:89] found id: ""
	I1126 20:12:44.684310   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:44.684364   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.687985   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:44.688057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:44.713170   59960 cri.go:89] found id: ""
	I1126 20:12:44.713193   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.713202   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:44.713211   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:44.713225   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:44.790764   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:44.782647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.783505   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785179   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785579   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.787022   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:44.782647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.783505   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785179   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785579   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.787022   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:44.790787   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:44.790801   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:44.841911   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:44.842082   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:44.886124   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:44.886155   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:44.956783   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:44.956817   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:44.992805   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:44.992834   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:45.021163   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:45.021190   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:45.060873   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:45.061452   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:45.201027   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:45.201119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:45.266419   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:45.266547   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:45.415986   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:45.416024   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:47.928674   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:47.940771   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:47.940843   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:47.966175   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:47.966194   59960 cri.go:89] found id: ""
	I1126 20:12:47.966202   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:47.966254   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:47.969908   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:47.970011   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:47.997001   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:47.997027   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:47.997032   59960 cri.go:89] found id: ""
	I1126 20:12:47.997040   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:47.997096   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.001757   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.005881   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:48.005980   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:48.031565   59960 cri.go:89] found id: ""
	I1126 20:12:48.031587   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.031595   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:48.031602   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:48.031660   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:48.063357   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:48.063380   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:48.063386   59960 cri.go:89] found id: ""
	I1126 20:12:48.063393   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:48.063450   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.068044   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.073135   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:48.073260   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:48.103364   59960 cri.go:89] found id: ""
	I1126 20:12:48.103391   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.103401   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:48.103408   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:48.103511   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:48.134700   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:48.134720   59960 cri.go:89] found id: ""
	I1126 20:12:48.134728   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:48.134795   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.138489   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:48.138568   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:48.164615   59960 cri.go:89] found id: ""
	I1126 20:12:48.164639   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.164648   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:48.164657   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:48.164670   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:48.238206   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:48.238245   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:48.270325   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:48.270352   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:48.316632   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:48.316660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:48.328526   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:48.328554   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:48.370051   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:48.370081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:48.397236   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:48.397264   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:48.478994   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:48.479029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:48.586134   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:48.586167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:48.661172   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:48.650880   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.652436   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.653061   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.654717   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.655290   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:48.650880   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.652436   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.653061   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.654717   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.655290   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:48.661195   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:48.661211   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:48.689769   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:48.689797   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.235721   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:51.246961   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:51.247038   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:51.276386   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:51.276410   59960 cri.go:89] found id: ""
	I1126 20:12:51.276419   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:51.276472   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.280282   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:51.280363   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:51.307844   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:51.307875   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.307880   59960 cri.go:89] found id: ""
	I1126 20:12:51.307888   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:51.307944   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.311885   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.315516   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:51.315643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:51.343040   59960 cri.go:89] found id: ""
	I1126 20:12:51.343068   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.343077   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:51.343084   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:51.343144   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:51.371879   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:51.371901   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:51.371907   59960 cri.go:89] found id: ""
	I1126 20:12:51.371920   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:51.371976   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.375815   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.379444   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:51.379518   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:51.409590   59960 cri.go:89] found id: ""
	I1126 20:12:51.409615   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.409624   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:51.409630   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:51.409688   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:51.440665   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:51.440692   59960 cri.go:89] found id: ""
	I1126 20:12:51.440701   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:51.440756   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.444486   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:51.444565   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:51.470661   59960 cri.go:89] found id: ""
	I1126 20:12:51.470686   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.470695   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:51.470705   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:51.470749   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:51.482794   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:51.482823   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:51.570460   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:51.561457   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.562296   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.563970   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.564288   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.566409   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:51.561457   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.562296   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.563970   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.564288   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.566409   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:51.570484   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:51.570498   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:51.596696   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:51.596724   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:51.657780   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:51.657820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:51.736300   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:51.736338   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:51.772635   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:51.772664   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:51.808014   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:51.808042   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:51.909775   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:51.909814   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.955849   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:51.955887   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:51.986011   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:51.986040   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:54.569991   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:54.582000   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:54.582074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:54.610486   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:54.610506   59960 cri.go:89] found id: ""
	I1126 20:12:54.610515   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:54.610573   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.614711   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:54.614787   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:54.641548   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:54.641571   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:54.641577   59960 cri.go:89] found id: ""
	I1126 20:12:54.641584   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:54.641645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.645430   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.649375   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:54.649465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:54.677350   59960 cri.go:89] found id: ""
	I1126 20:12:54.677377   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.677386   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:54.677399   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:54.677456   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:54.706226   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:54.706249   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:54.706254   59960 cri.go:89] found id: ""
	I1126 20:12:54.706261   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:54.706315   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.710188   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.713666   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:54.713759   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:54.745132   59960 cri.go:89] found id: ""
	I1126 20:12:54.745158   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.745167   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:54.745174   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:54.745235   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:54.774016   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:54.774039   59960 cri.go:89] found id: ""
	I1126 20:12:54.774047   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:54.774105   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.778220   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:54.778293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:54.807768   59960 cri.go:89] found id: ""
	I1126 20:12:54.807831   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.807845   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:54.807855   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:54.807867   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:54.904620   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:54.904657   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:54.931520   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:54.931548   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:54.974322   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:54.974360   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:55.010146   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:55.010176   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:55.044963   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:55.045006   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:55.060490   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:55.060520   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:55.132694   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:55.124286   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.124937   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.126610   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.127207   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.128929   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:55.124286   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.124937   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.126610   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.127207   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.128929   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:55.132729   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:55.132746   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:55.180103   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:55.180139   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:55.258117   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:55.258154   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:55.289687   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:55.289716   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:57.870076   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:57.881883   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:57.881978   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:57.911809   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:57.911833   59960 cri.go:89] found id: ""
	I1126 20:12:57.911841   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:57.911899   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.915590   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:57.915685   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:57.943647   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:57.943671   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:57.943677   59960 cri.go:89] found id: ""
	I1126 20:12:57.943684   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:57.943747   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.947699   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.951409   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:57.951489   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:57.979114   59960 cri.go:89] found id: ""
	I1126 20:12:57.979138   59960 logs.go:282] 0 containers: []
	W1126 20:12:57.979147   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:57.979154   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:57.979214   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:58.009760   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:58.009781   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:58.009787   59960 cri.go:89] found id: ""
	I1126 20:12:58.009794   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:58.009855   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.013598   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.017135   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:58.017207   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:58.047222   59960 cri.go:89] found id: ""
	I1126 20:12:58.047247   59960 logs.go:282] 0 containers: []
	W1126 20:12:58.047255   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:58.047262   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:58.047324   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:58.094431   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:58.094510   59960 cri.go:89] found id: ""
	I1126 20:12:58.094524   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:58.094586   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.099004   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:58.099099   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:58.126698   59960 cri.go:89] found id: ""
	I1126 20:12:58.126727   59960 logs.go:282] 0 containers: []
	W1126 20:12:58.126735   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:58.126744   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:58.126756   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:58.155602   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:58.155629   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:58.196131   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:58.196166   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:58.243760   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:58.243793   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:58.314546   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:58.314583   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:58.347422   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:58.347451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:58.373247   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:58.373277   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:58.448488   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:58.448524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:58.480586   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:58.480615   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:58.586743   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:58.586799   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:58.600003   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:58.600029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:58.682648   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:58.673481   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.674315   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.675021   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.676838   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.677737   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:58.673481   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.674315   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.675021   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.676838   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.677737   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:01.183502   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:01.195046   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:01.195153   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:01.224257   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:01.224281   59960 cri.go:89] found id: ""
	I1126 20:13:01.224289   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:01.224365   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.228134   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:01.228206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:01.265990   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:01.266014   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:01.266019   59960 cri.go:89] found id: ""
	I1126 20:13:01.266027   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:01.266084   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.270682   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.274505   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:01.274580   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:01.302962   59960 cri.go:89] found id: ""
	I1126 20:13:01.302989   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.302998   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:01.303005   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:01.303072   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:01.335599   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:01.335621   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:01.335627   59960 cri.go:89] found id: ""
	I1126 20:13:01.335635   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:01.335689   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.339621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.343531   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:01.343614   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:01.369553   59960 cri.go:89] found id: ""
	I1126 20:13:01.369578   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.369588   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:01.369594   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:01.369657   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:01.402170   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:01.402197   59960 cri.go:89] found id: ""
	I1126 20:13:01.402205   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:01.402266   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.406260   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:01.406336   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:01.432250   59960 cri.go:89] found id: ""
	I1126 20:13:01.432326   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.432352   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:01.432362   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:01.432378   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:01.473457   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:01.473491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:01.525391   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:01.525445   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:01.557734   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:01.557765   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:01.650427   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:01.650465   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:01.696040   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:01.696070   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:01.801258   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:01.801297   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:01.872498   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:01.872534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:01.912672   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:01.912725   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:01.927976   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:01.928008   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:02.002577   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:01.992139   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.993221   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.994589   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996153   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996915   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:01.992139   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.993221   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.994589   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996153   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996915   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:02.002601   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:02.002614   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:04.532051   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:04.544501   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:04.544572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:04.571414   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:04.571435   59960 cri.go:89] found id: ""
	I1126 20:13:04.571443   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:04.571494   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.575072   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:04.575149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:04.603292   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:04.603312   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:04.603316   59960 cri.go:89] found id: ""
	I1126 20:13:04.603326   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:04.603378   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.607479   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.610889   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:04.610970   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:04.636626   59960 cri.go:89] found id: ""
	I1126 20:13:04.636652   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.636662   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:04.636668   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:04.636745   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:04.665487   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:04.665511   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:04.665516   59960 cri.go:89] found id: ""
	I1126 20:13:04.665523   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:04.665599   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.669516   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.673155   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:04.673221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:04.705848   59960 cri.go:89] found id: ""
	I1126 20:13:04.705873   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.705882   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:04.705888   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:04.705971   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:04.741254   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:04.741277   59960 cri.go:89] found id: ""
	I1126 20:13:04.741285   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:04.741340   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.745396   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:04.745469   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:04.777680   59960 cri.go:89] found id: ""
	I1126 20:13:04.777713   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.777723   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:04.777732   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:04.777744   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:04.884972   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:04.885008   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:04.898040   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:04.898066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:04.971530   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:04.971610   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:05.003493   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:05.003573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:05.082481   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:05.082515   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:05.116089   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:05.116119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:05.186979   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:05.178888   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.179664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181297   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.183205   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:05.178888   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.179664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181297   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.183205   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:05.187006   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:05.187020   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:05.214669   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:05.214698   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:05.261207   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:05.261238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:05.306449   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:05.306482   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:07.838042   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:07.850498   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:07.850567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:07.878108   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:07.878130   59960 cri.go:89] found id: ""
	I1126 20:13:07.878138   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:07.878197   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.882580   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:07.882654   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:07.911855   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:07.911886   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:07.911891   59960 cri.go:89] found id: ""
	I1126 20:13:07.911899   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:07.911960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.915705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.919300   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:07.919371   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:07.951018   59960 cri.go:89] found id: ""
	I1126 20:13:07.951044   59960 logs.go:282] 0 containers: []
	W1126 20:13:07.951053   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:07.951059   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:07.951119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:07.978929   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:07.978951   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:07.978956   59960 cri.go:89] found id: ""
	I1126 20:13:07.978963   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:07.979017   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.983189   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.986830   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:07.986903   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:08.016199   59960 cri.go:89] found id: ""
	I1126 20:13:08.016231   59960 logs.go:282] 0 containers: []
	W1126 20:13:08.016240   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:08.016251   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:08.016325   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:08.053456   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:08.053528   59960 cri.go:89] found id: ""
	I1126 20:13:08.053549   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:08.053644   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:08.057986   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:08.058066   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:08.087479   59960 cri.go:89] found id: ""
	I1126 20:13:08.087508   59960 logs.go:282] 0 containers: []
	W1126 20:13:08.087517   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:08.087533   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:08.087546   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:08.132468   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:08.132502   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:08.176740   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:08.176778   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:08.250131   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:08.250178   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:08.280307   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:08.280337   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:08.310477   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:08.310506   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:08.413610   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:08.413648   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:08.484512   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:08.474848   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.476074   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.477530   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.478182   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.479748   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:08.474848   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.476074   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.477530   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.478182   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.479748   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:08.484538   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:08.484551   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:08.561138   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:08.561172   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:08.596362   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:08.596439   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:08.609838   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:08.609909   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.136633   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:11.147922   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:11.148007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:11.179880   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.179915   59960 cri.go:89] found id: ""
	I1126 20:13:11.179923   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:11.180040   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.184887   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:11.184958   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:11.213848   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:11.213872   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:11.213878   59960 cri.go:89] found id: ""
	I1126 20:13:11.213885   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:11.213981   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.217804   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.221572   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:11.221649   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:11.258706   59960 cri.go:89] found id: ""
	I1126 20:13:11.258783   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.258799   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:11.258806   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:11.258880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:11.289663   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:11.289686   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:11.289692   59960 cri.go:89] found id: ""
	I1126 20:13:11.289699   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:11.289755   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.293522   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.298425   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:11.298504   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:11.325442   59960 cri.go:89] found id: ""
	I1126 20:13:11.325508   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.325534   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:11.325552   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:11.325636   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:11.352745   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:11.352808   59960 cri.go:89] found id: ""
	I1126 20:13:11.352834   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:11.352923   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.356710   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:11.356824   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:11.384378   59960 cri.go:89] found id: ""
	I1126 20:13:11.384402   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.384412   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:11.384421   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:11.384433   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:11.396869   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:11.396938   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:11.467278   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:11.459180   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.459948   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.461472   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.462000   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.463589   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:11.459180   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.459948   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.461472   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.462000   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.463589   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:11.467302   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:11.467316   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.494598   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:11.494626   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:11.533337   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:11.533372   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:11.559364   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:11.559392   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:11.642834   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:11.642873   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:11.680367   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:11.680393   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:11.784039   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:11.784075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:11.834225   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:11.834260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:11.905094   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:11.905129   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.439226   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:14.451155   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:14.451245   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:14.493752   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:14.493776   59960 cri.go:89] found id: ""
	I1126 20:13:14.493784   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:14.493840   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.497504   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:14.497627   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:14.524624   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:14.524646   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:14.524652   59960 cri.go:89] found id: ""
	I1126 20:13:14.524659   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:14.524743   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.528418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.532417   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:14.532512   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:14.559402   59960 cri.go:89] found id: ""
	I1126 20:13:14.559477   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.559491   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:14.559498   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:14.559556   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:14.588825   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:14.588848   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.588853   59960 cri.go:89] found id: ""
	I1126 20:13:14.588860   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:14.588921   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.593022   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.596763   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:14.596831   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:14.624835   59960 cri.go:89] found id: ""
	I1126 20:13:14.624858   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.624867   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:14.624874   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:14.624929   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:14.650771   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:14.650846   59960 cri.go:89] found id: ""
	I1126 20:13:14.650872   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:14.650960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.656095   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:14.656219   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:14.682420   59960 cri.go:89] found id: ""
	I1126 20:13:14.682493   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.682517   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:14.682540   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:14.682581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:14.722936   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:14.722971   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.754105   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:14.754134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:14.786128   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:14.786156   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:14.798341   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:14.798370   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:14.873270   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:14.865757   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.866349   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.867866   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.868348   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.869793   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:14.865757   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.866349   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.867866   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.868348   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.869793   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:14.873292   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:14.873306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:14.920206   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:14.920240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:14.996591   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:14.996624   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:15.024423   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:15.024451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:15.105848   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:15.105881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:15.205091   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:15.205170   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:17.734682   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:17.745326   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:17.745391   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:17.773503   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:17.773525   59960 cri.go:89] found id: ""
	I1126 20:13:17.773534   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:17.773621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.777326   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:17.777400   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:17.805117   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:17.805139   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:17.805144   59960 cri.go:89] found id: ""
	I1126 20:13:17.805151   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:17.805206   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.809065   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.812530   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:17.812601   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:17.841430   59960 cri.go:89] found id: ""
	I1126 20:13:17.841456   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.841465   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:17.841472   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:17.841530   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:17.868985   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:17.869009   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:17.869014   59960 cri.go:89] found id: ""
	I1126 20:13:17.869024   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:17.869081   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.882183   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.885701   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:17.885794   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:17.918849   59960 cri.go:89] found id: ""
	I1126 20:13:17.918872   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.918880   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:17.918887   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:17.918947   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:17.949773   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:17.949849   59960 cri.go:89] found id: ""
	I1126 20:13:17.949872   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:17.949996   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.953636   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:17.953705   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:17.980243   59960 cri.go:89] found id: ""
	I1126 20:13:17.980266   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.980275   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:17.980284   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:17.980295   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:18.011301   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:18.011331   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:18.038493   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:18.038526   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:18.080613   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:18.080641   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:18.160950   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:18.160988   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:18.262170   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:18.262215   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:18.275569   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:18.275593   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:18.351781   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:18.343534   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.344057   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.345769   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.346381   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.347931   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:18.351805   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:18.351817   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:18.389344   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:18.389375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:18.434916   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:18.434949   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:18.527668   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:18.527702   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.058771   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:21.073274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:21.073339   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:21.121326   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:21.121345   59960 cri.go:89] found id: ""
	I1126 20:13:21.121356   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:21.121415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.130434   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:21.130507   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:21.164100   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:21.164161   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:21.164191   59960 cri.go:89] found id: ""
	I1126 20:13:21.164212   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:21.164289   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.168566   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.173217   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:21.173328   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:21.201882   59960 cri.go:89] found id: ""
	I1126 20:13:21.202006   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.202036   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:21.202055   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:21.202157   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:21.230033   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:21.230099   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:21.230120   59960 cri.go:89] found id: ""
	I1126 20:13:21.230144   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:21.230222   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.234188   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.238625   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:21.238709   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:21.266450   59960 cri.go:89] found id: ""
	I1126 20:13:21.266476   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.266485   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:21.266492   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:21.266567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:21.293192   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.293221   59960 cri.go:89] found id: ""
	I1126 20:13:21.293229   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:21.293320   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.297074   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:21.297146   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:21.325608   59960 cri.go:89] found id: ""
	I1126 20:13:21.325635   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.325644   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:21.325653   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:21.325665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:21.365168   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:21.365201   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:21.407809   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:21.407841   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:21.490502   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:21.490538   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:21.593562   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:21.593598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:21.620251   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:21.620280   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:21.696224   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:21.696260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:21.724295   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:21.724324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.754121   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:21.754146   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:21.785320   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:21.785347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:21.797528   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:21.797556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:21.871066   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:21.862248   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.863127   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.864832   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.865449   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.867089   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:24.371542   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:24.382011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:24.382074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:24.413323   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:24.413351   59960 cri.go:89] found id: ""
	I1126 20:13:24.413360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:24.413418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.417248   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:24.417327   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:24.443549   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:24.443571   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:24.443576   59960 cri.go:89] found id: ""
	I1126 20:13:24.443583   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:24.443638   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.447448   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.450865   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:24.450933   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:24.481019   59960 cri.go:89] found id: ""
	I1126 20:13:24.481043   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.481052   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:24.481059   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:24.481119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:24.509327   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:24.509349   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:24.509354   59960 cri.go:89] found id: ""
	I1126 20:13:24.509361   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:24.509416   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.512867   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.516116   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:24.516181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:24.546284   59960 cri.go:89] found id: ""
	I1126 20:13:24.546361   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.546390   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:24.546405   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:24.546464   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:24.571968   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:24.572032   59960 cri.go:89] found id: ""
	I1126 20:13:24.572047   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:24.572113   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.575760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:24.575830   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:24.603299   59960 cri.go:89] found id: ""
	I1126 20:13:24.603325   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.603334   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:24.603373   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:24.603390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:24.642562   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:24.642595   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:24.696607   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:24.696640   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:24.724494   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:24.724523   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:24.805443   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:24.805477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:24.880673   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:24.872137   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.872936   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.874737   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.875329   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.876994   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:24.880694   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:24.880708   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:24.912019   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:24.912047   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:24.998475   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:24.998511   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:25.027058   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:25.027084   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:25.060548   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:25.060577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:25.167756   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:25.167795   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:27.682279   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:27.693116   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:27.693189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:27.720687   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:27.720706   59960 cri.go:89] found id: ""
	I1126 20:13:27.720713   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:27.720765   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.724317   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:27.724388   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:27.751345   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:27.751369   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:27.751375   59960 cri.go:89] found id: ""
	I1126 20:13:27.751384   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:27.751445   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.755313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.758668   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:27.758738   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:27.788496   59960 cri.go:89] found id: ""
	I1126 20:13:27.788567   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.788592   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:27.788611   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:27.788703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:27.815714   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:27.815743   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:27.815749   59960 cri.go:89] found id: ""
	I1126 20:13:27.815757   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:27.815831   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.819360   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.822959   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:27.823038   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:27.853270   59960 cri.go:89] found id: ""
	I1126 20:13:27.853316   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.853326   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:27.853333   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:27.853403   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:27.880677   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:27.880701   59960 cri.go:89] found id: ""
	I1126 20:13:27.880710   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:27.880766   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.884425   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:27.884499   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:27.917060   59960 cri.go:89] found id: ""
	I1126 20:13:27.917126   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.917150   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:27.917183   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:27.917213   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:27.929246   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:27.929321   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:28.005492   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:27.995998   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.996970   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.999116   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.000043   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.001867   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:28.005554   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:28.005581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:28.032388   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:28.032414   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:28.090244   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:28.090279   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:28.140049   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:28.140081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:28.217015   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:28.217052   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:28.252634   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:28.252663   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:28.356298   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:28.356347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:28.391198   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:28.391227   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:28.470669   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:28.470706   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:31.018712   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:31.029520   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:31.029594   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:31.067229   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:31.067249   59960 cri.go:89] found id: ""
	I1126 20:13:31.067257   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:31.067315   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.071728   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:31.071796   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:31.100937   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:31.101015   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:31.101024   59960 cri.go:89] found id: ""
	I1126 20:13:31.101032   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:31.101092   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.106006   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.109883   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:31.110020   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:31.140073   59960 cri.go:89] found id: ""
	I1126 20:13:31.140098   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.140107   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:31.140114   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:31.140177   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:31.170126   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:31.170150   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:31.170155   59960 cri.go:89] found id: ""
	I1126 20:13:31.170163   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:31.170220   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.175522   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.180015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:31.180137   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:31.216744   59960 cri.go:89] found id: ""
	I1126 20:13:31.216771   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.216781   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:31.216787   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:31.216847   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:31.244620   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:31.244653   59960 cri.go:89] found id: ""
	I1126 20:13:31.244661   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:31.244727   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.248677   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:31.248770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:31.275812   59960 cri.go:89] found id: ""
	I1126 20:13:31.275890   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.275914   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:31.275936   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:31.275972   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:31.308954   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:31.308981   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:31.404058   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:31.404140   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:31.449144   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:31.449177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:31.526538   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:31.526575   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:31.613358   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:31.613393   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:31.626272   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:31.626300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:31.701051   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:31.692350   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.693035   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.694572   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.695120   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.696599   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:31.692350   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.693035   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.694572   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.695120   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.696599   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:31.701076   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:31.701089   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:31.726047   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:31.726075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:31.770205   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:31.770246   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:31.800872   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:31.800898   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.331337   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:34.343013   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:34.343079   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:34.369127   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:34.369186   59960 cri.go:89] found id: ""
	I1126 20:13:34.369220   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:34.369305   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.372919   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:34.372984   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:34.400785   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:34.400806   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:34.400811   59960 cri.go:89] found id: ""
	I1126 20:13:34.400818   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:34.400871   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.404967   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.408568   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:34.408648   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:34.434956   59960 cri.go:89] found id: ""
	I1126 20:13:34.434981   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.434990   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:34.434996   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:34.435051   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:34.472918   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:34.472943   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:34.472948   59960 cri.go:89] found id: ""
	I1126 20:13:34.472956   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:34.473009   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.476556   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.480021   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:34.480097   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:34.506491   59960 cri.go:89] found id: ""
	I1126 20:13:34.506513   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.506522   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:34.506528   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:34.506587   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:34.534595   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.534618   59960 cri.go:89] found id: ""
	I1126 20:13:34.534627   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:34.534681   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.542373   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:34.542487   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:34.569404   59960 cri.go:89] found id: ""
	I1126 20:13:34.569439   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.569449   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:34.569473   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:34.569491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:34.594901   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:34.594926   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:34.661252   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:34.661357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:34.736470   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:34.736504   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.767635   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:34.767659   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:34.849541   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:34.849578   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:34.890089   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:34.890122   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:34.918362   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:34.918390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:34.955774   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:34.955800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:35.056965   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:35.057001   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:35.078639   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:35.078668   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:35.151655   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:35.143337   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.143918   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.145438   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.146046   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.147630   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:35.143337   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.143918   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.145438   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.146046   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.147630   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:37.653306   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:37.665236   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:37.665306   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:37.692381   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:37.692404   59960 cri.go:89] found id: ""
	I1126 20:13:37.692420   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:37.692475   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.696411   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:37.696485   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:37.733416   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:37.733447   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:37.733452   59960 cri.go:89] found id: ""
	I1126 20:13:37.733459   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:37.733512   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.737487   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.740759   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:37.740827   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:37.770540   59960 cri.go:89] found id: ""
	I1126 20:13:37.770563   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.770571   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:37.770578   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:37.770645   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:37.798542   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:37.798566   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:37.798572   59960 cri.go:89] found id: ""
	I1126 20:13:37.798579   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:37.798632   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.802507   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.806007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:37.806128   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:37.831752   59960 cri.go:89] found id: ""
	I1126 20:13:37.831780   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.831789   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:37.831796   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:37.831911   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:37.859491   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:37.859516   59960 cri.go:89] found id: ""
	I1126 20:13:37.859526   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:37.859608   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.863305   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:37.863407   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:37.890262   59960 cri.go:89] found id: ""
	I1126 20:13:37.890324   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.890347   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:37.890370   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:37.890389   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:37.915303   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:37.915334   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:38.015981   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:38.016018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:38.028479   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:38.028518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:38.117235   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:38.107607   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.108494   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.110529   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.111224   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.112955   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:38.107607   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.108494   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.110529   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.111224   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.112955   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:38.117268   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:38.117293   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:38.146073   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:38.146106   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:38.223055   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:38.223091   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:38.256738   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:38.256769   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:38.284204   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:38.284234   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:38.322205   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:38.322237   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:38.365768   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:38.365800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:40.946037   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:40.957084   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:40.957219   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:40.988160   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:40.988223   59960 cri.go:89] found id: ""
	I1126 20:13:40.988247   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:40.988330   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:40.991862   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:40.991975   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:41.021645   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:41.021671   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:41.021676   59960 cri.go:89] found id: ""
	I1126 20:13:41.021683   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:41.021776   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.025458   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.028751   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:41.028818   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:41.055272   59960 cri.go:89] found id: ""
	I1126 20:13:41.055297   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.055306   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:41.055313   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:41.055373   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:41.083272   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:41.083293   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:41.083298   59960 cri.go:89] found id: ""
	I1126 20:13:41.083306   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:41.083361   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.089116   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.092770   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:41.092882   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:41.119939   59960 cri.go:89] found id: ""
	I1126 20:13:41.119969   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.119978   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:41.119985   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:41.120085   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:41.149635   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:41.149657   59960 cri.go:89] found id: ""
	I1126 20:13:41.149666   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:41.149719   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.153346   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:41.153420   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:41.180294   59960 cri.go:89] found id: ""
	I1126 20:13:41.180320   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.180329   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:41.180338   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:41.180350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:41.207608   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:41.207638   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:41.250184   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:41.250217   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:41.280787   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:41.280815   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:41.350595   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:41.339246   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.340025   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.341777   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.342622   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.345147   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:41.350618   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:41.350631   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:41.395571   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:41.395607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:41.471537   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:41.471576   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:41.503158   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:41.503187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:41.581612   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:41.581647   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:41.616210   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:41.616238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:41.712278   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:41.712311   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:44.224835   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:44.235354   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:44.235427   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:44.262020   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:44.262040   59960 cri.go:89] found id: ""
	I1126 20:13:44.262047   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:44.262100   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.266500   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:44.266621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:44.293469   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:44.293492   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:44.293498   59960 cri.go:89] found id: ""
	I1126 20:13:44.293515   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:44.293592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.297513   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.301293   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:44.301379   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:44.331229   59960 cri.go:89] found id: ""
	I1126 20:13:44.331252   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.331260   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:44.331266   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:44.331326   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:44.358510   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:44.358529   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:44.358534   59960 cri.go:89] found id: ""
	I1126 20:13:44.358540   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:44.358597   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.362369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.365719   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:44.365788   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:44.401237   59960 cri.go:89] found id: ""
	I1126 20:13:44.401303   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.401326   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:44.401348   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:44.401437   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:44.428506   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:44.428524   59960 cri.go:89] found id: ""
	I1126 20:13:44.428537   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:44.428592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.432302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:44.432379   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:44.461193   59960 cri.go:89] found id: ""
	I1126 20:13:44.461216   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.461225   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:44.461234   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:44.461245   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:44.472842   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:44.472911   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:44.552602   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:44.536833   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.537581   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.546763   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.547452   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.548655   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:44.552629   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:44.552642   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:44.579143   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:44.579171   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:44.608447   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:44.608472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:44.634421   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:44.634447   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:44.669334   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:44.669362   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:44.770710   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:44.770785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:44.815986   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:44.816016   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:44.860293   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:44.860327   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:44.936110   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:44.936144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:47.514839   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:47.528244   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:47.528398   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:47.557240   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:47.557263   59960 cri.go:89] found id: ""
	I1126 20:13:47.557271   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:47.557328   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.561044   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:47.561146   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:47.586866   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:47.586888   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:47.586894   59960 cri.go:89] found id: ""
	I1126 20:13:47.586901   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:47.586956   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.591194   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.594829   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:47.594905   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:47.621081   59960 cri.go:89] found id: ""
	I1126 20:13:47.621104   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.621113   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:47.621120   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:47.621182   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:47.649583   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:47.649605   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:47.649610   59960 cri.go:89] found id: ""
	I1126 20:13:47.649618   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:47.649673   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.655090   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.659029   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:47.659096   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:47.685101   59960 cri.go:89] found id: ""
	I1126 20:13:47.685125   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.685134   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:47.685141   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:47.685198   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:47.712581   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:47.712603   59960 cri.go:89] found id: ""
	I1126 20:13:47.712612   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:47.712673   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.716384   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:47.716461   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:47.746287   59960 cri.go:89] found id: ""
	I1126 20:13:47.746321   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.746330   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:47.746357   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:47.746375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:47.776577   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:47.776607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:47.810845   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:47.810874   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:47.851317   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:47.851350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:47.897021   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:47.897054   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:47.925761   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:47.925792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:47.953836   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:47.953863   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:48.054533   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:48.054569   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:48.074474   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:48.074505   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:48.148938   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:48.137331   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.137950   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.139682   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.140242   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.143726   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:48.148963   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:48.148977   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:48.231199   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:48.231234   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:50.823233   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:50.833805   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:50.833878   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:50.862309   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:50.862333   59960 cri.go:89] found id: ""
	I1126 20:13:50.862342   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:50.862396   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.865957   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:50.866034   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:50.892542   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:50.892565   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:50.892571   59960 cri.go:89] found id: ""
	I1126 20:13:50.892578   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:50.892632   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.896328   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.899831   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:50.899905   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:50.931031   59960 cri.go:89] found id: ""
	I1126 20:13:50.931098   59960 logs.go:282] 0 containers: []
	W1126 20:13:50.931112   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:50.931119   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:50.931176   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:50.958547   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:50.958580   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:50.958586   59960 cri.go:89] found id: ""
	I1126 20:13:50.958594   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:50.958649   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.962711   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.966380   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:50.966453   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:50.998188   59960 cri.go:89] found id: ""
	I1126 20:13:50.998483   59960 logs.go:282] 0 containers: []
	W1126 20:13:50.998498   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:50.998505   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:50.998592   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:51.031422   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:51.031447   59960 cri.go:89] found id: ""
	I1126 20:13:51.031462   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:51.031519   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:51.035715   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:51.035788   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:51.077429   59960 cri.go:89] found id: ""
	I1126 20:13:51.077452   59960 logs.go:282] 0 containers: []
	W1126 20:13:51.077460   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:51.077469   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:51.077481   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:51.105578   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:51.105609   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:51.188473   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:51.188518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:51.220853   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:51.220886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:51.304811   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:51.304848   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:51.337094   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:51.337162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:51.434145   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:51.434183   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:51.474781   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:51.474815   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:51.523360   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:51.523390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:51.556210   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:51.556238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:51.568960   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:51.568989   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:51.646125   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:51.637986   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.638634   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640319   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640884   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.642607   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:51.637986   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.638634   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640319   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640884   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.642607   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:54.147140   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:54.159570   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:54.159641   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:54.190129   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:54.190150   59960 cri.go:89] found id: ""
	I1126 20:13:54.190158   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:54.190221   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.193723   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:54.193795   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:54.221859   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:54.221881   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:54.221886   59960 cri.go:89] found id: ""
	I1126 20:13:54.221893   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:54.221986   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.225619   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.229615   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:54.229686   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:54.257427   59960 cri.go:89] found id: ""
	I1126 20:13:54.257454   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.257464   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:54.257470   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:54.257528   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:54.283499   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:54.283522   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:54.283528   59960 cri.go:89] found id: ""
	I1126 20:13:54.283535   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:54.283591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.287279   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.291072   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:54.291164   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:54.320377   59960 cri.go:89] found id: ""
	I1126 20:13:54.320409   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.320418   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:54.320424   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:54.320490   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:54.346357   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:54.346388   59960 cri.go:89] found id: ""
	I1126 20:13:54.346397   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:54.346453   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.350217   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:54.350337   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:54.387000   59960 cri.go:89] found id: ""
	I1126 20:13:54.387033   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.387042   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:54.387052   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:54.387064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:54.398981   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:54.399006   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:54.424733   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:54.424761   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:54.464124   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:54.464199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:54.516097   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:54.516149   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:54.597621   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:54.597656   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:54.626882   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:54.626916   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:54.706226   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:54.706262   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:54.777575   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:54.768229   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.769042   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.770705   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.771452   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.773075   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:54.768229   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.769042   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.770705   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.771452   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.773075   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:54.777599   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:54.777612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:54.808526   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:54.808556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:54.839385   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:54.839412   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:57.435357   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:57.446250   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:57.446321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:57.476511   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:57.476531   59960 cri.go:89] found id: ""
	I1126 20:13:57.476539   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:57.476595   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.480521   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:57.480599   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:57.508216   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:57.508239   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:57.508244   59960 cri.go:89] found id: ""
	I1126 20:13:57.508251   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:57.508312   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.512264   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.515930   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:57.516007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:57.546712   59960 cri.go:89] found id: ""
	I1126 20:13:57.546737   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.546746   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:57.546753   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:57.546811   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:57.575286   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:57.575308   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:57.575314   59960 cri.go:89] found id: ""
	I1126 20:13:57.575321   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:57.575403   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.579177   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.582844   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:57.582947   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:57.610240   59960 cri.go:89] found id: ""
	I1126 20:13:57.610268   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.610276   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:57.610282   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:57.610366   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:57.637690   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:57.637715   59960 cri.go:89] found id: ""
	I1126 20:13:57.637722   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:57.637804   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.641691   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:57.641816   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:57.673478   59960 cri.go:89] found id: ""
	I1126 20:13:57.673512   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.673521   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:57.673546   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:57.673565   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:57.724644   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:57.724677   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:57.801587   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:57.801622   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:57.846990   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:57.847020   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:57.948301   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:57.948336   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:57.960477   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:57.960510   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:58.036195   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:58.028003   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.028530   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030166   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030875   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.032666   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:58.028003   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.028530   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030166   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030875   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.032666   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:58.036262   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:58.036289   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:58.071247   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:58.071284   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:58.102552   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:58.102582   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:58.131358   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:58.131450   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:58.207844   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:58.207883   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:00.754664   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:00.765702   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:00.765771   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:00.806554   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:00.806579   59960 cri.go:89] found id: ""
	I1126 20:14:00.806587   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:00.806641   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.810501   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:00.810586   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:00.838112   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:00.838139   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:00.838144   59960 cri.go:89] found id: ""
	I1126 20:14:00.838152   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:00.838207   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.842001   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.845613   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:00.845684   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:00.874701   59960 cri.go:89] found id: ""
	I1126 20:14:00.874726   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.874735   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:00.874742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:00.874821   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:00.903003   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:00.903027   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:00.903032   59960 cri.go:89] found id: ""
	I1126 20:14:00.903039   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:00.903097   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.907398   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.911095   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:00.911169   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:00.937717   59960 cri.go:89] found id: ""
	I1126 20:14:00.937741   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.937750   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:00.937757   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:00.937815   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:00.964659   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:00.964683   59960 cri.go:89] found id: ""
	I1126 20:14:00.964692   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:00.964761   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.969052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:00.969128   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:00.996896   59960 cri.go:89] found id: ""
	I1126 20:14:00.996921   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.996930   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:00.996940   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:00.996968   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:01.052982   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:01.053013   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:01.164358   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:01.164396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:01.245847   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:01.237260   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.238200   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.239244   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.240970   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.241435   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:01.237260   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.238200   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.239244   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.240970   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.241435   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:01.245874   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:01.245888   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:01.278036   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:01.278066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:01.321761   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:01.321798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:01.349850   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:01.349877   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:01.362087   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:01.362115   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:01.406110   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:01.406143   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:01.488538   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:01.488580   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:01.524108   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:01.524314   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:04.107171   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:04.119134   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:04.119206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:04.150892   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:04.150913   59960 cri.go:89] found id: ""
	I1126 20:14:04.150920   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:04.150993   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.154614   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:04.154713   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:04.181842   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:04.181866   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:04.181870   59960 cri.go:89] found id: ""
	I1126 20:14:04.181878   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:04.181958   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.185706   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.189884   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:04.190033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:04.217117   59960 cri.go:89] found id: ""
	I1126 20:14:04.217143   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.217152   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:04.217159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:04.217218   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:04.244873   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:04.244893   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:04.244897   59960 cri.go:89] found id: ""
	I1126 20:14:04.244904   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:04.244962   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.248633   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.252113   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:04.252223   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:04.281381   59960 cri.go:89] found id: ""
	I1126 20:14:04.281410   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.281420   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:04.281426   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:04.281484   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:04.309793   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:04.309817   59960 cri.go:89] found id: ""
	I1126 20:14:04.309825   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:04.309881   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.313555   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:04.313625   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:04.341073   59960 cri.go:89] found id: ""
	I1126 20:14:04.341100   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.341109   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:04.341117   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:04.341129   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:04.436704   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:04.436741   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:04.511848   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:04.500099   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.500700   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506376   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506925   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.508357   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:04.500099   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.500700   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506376   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506925   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.508357   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:04.511872   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:04.511887   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:04.572587   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:04.572662   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:04.622150   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:04.622182   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:04.648129   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:04.648200   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:04.736436   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:04.736472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:04.748750   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:04.748783   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:04.784731   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:04.784756   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:04.861032   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:04.861067   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:04.888273   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:04.888306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:07.422077   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:07.432698   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:07.432776   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:07.463525   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:07.463545   59960 cri.go:89] found id: ""
	I1126 20:14:07.463553   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:07.463605   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.467175   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:07.467243   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:07.497801   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:07.497821   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:07.497826   59960 cri.go:89] found id: ""
	I1126 20:14:07.497833   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:07.497888   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.501759   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.505120   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:07.505198   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:07.539084   59960 cri.go:89] found id: ""
	I1126 20:14:07.539112   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.539121   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:07.539127   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:07.539189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:07.567688   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:07.567713   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:07.567720   59960 cri.go:89] found id: ""
	I1126 20:14:07.567727   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:07.567788   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.571445   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.575895   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:07.575973   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:07.603679   59960 cri.go:89] found id: ""
	I1126 20:14:07.603704   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.603713   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:07.603720   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:07.603801   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:07.633845   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:07.633869   59960 cri.go:89] found id: ""
	I1126 20:14:07.633877   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:07.633982   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.638439   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:07.638510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:07.669305   59960 cri.go:89] found id: ""
	I1126 20:14:07.669329   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.669338   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:07.669348   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:07.669361   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:07.746001   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:07.746039   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:07.773829   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:07.773859   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:07.806673   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:07.806705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:07.847992   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:07.848029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:07.876479   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:07.876507   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:07.952982   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:07.953018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:08.054195   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:08.054235   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:08.071790   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:08.071819   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:08.158168   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:08.148798   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.150262   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.151831   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.152401   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.154098   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:08.148798   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.150262   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.151831   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.152401   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.154098   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:08.158237   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:08.158266   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:08.185227   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:08.185257   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:10.730401   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:10.741460   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:10.741529   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:10.774241   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:10.774263   59960 cri.go:89] found id: ""
	I1126 20:14:10.774270   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:10.774327   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.778033   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:10.778103   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:10.806991   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:10.807015   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:10.807021   59960 cri.go:89] found id: ""
	I1126 20:14:10.807028   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:10.807083   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.810846   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.814441   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:10.814513   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:10.843200   59960 cri.go:89] found id: ""
	I1126 20:14:10.843226   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.843236   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:10.843242   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:10.843301   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:10.871039   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:10.871062   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:10.871068   59960 cri.go:89] found id: ""
	I1126 20:14:10.871075   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:10.871129   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.874747   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.878577   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:10.878661   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:10.907317   59960 cri.go:89] found id: ""
	I1126 20:14:10.907343   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.907352   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:10.907359   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:10.907414   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:10.936274   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:10.936297   59960 cri.go:89] found id: ""
	I1126 20:14:10.936306   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:10.936385   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.939976   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:10.940048   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:10.969776   59960 cri.go:89] found id: ""
	I1126 20:14:10.969848   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.969884   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:10.969911   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:10.969997   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:11.067923   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:11.067964   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:11.082749   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:11.082781   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:11.124244   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:11.124281   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:11.173196   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:11.173232   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:11.200233   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:11.200268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:11.284292   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:11.284327   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:11.317517   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:11.317545   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:11.395020   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:11.386165   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.387087   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388651   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388979   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.390832   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:11.386165   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.387087   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388651   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388979   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.390832   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:11.395043   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:11.395056   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:11.422025   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:11.422059   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:11.500554   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:11.500588   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.028990   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:14.043196   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:14.043275   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:14.078393   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:14.078418   59960 cri.go:89] found id: ""
	I1126 20:14:14.078426   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:14.078485   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.082581   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:14.082679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:14.113586   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:14.113611   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:14.113616   59960 cri.go:89] found id: ""
	I1126 20:14:14.113623   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:14.113677   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.117367   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.120847   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:14.120921   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:14.147191   59960 cri.go:89] found id: ""
	I1126 20:14:14.147214   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.147222   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:14.147229   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:14.147287   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:14.173461   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:14.173483   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.173489   59960 cri.go:89] found id: ""
	I1126 20:14:14.173496   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:14.173560   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.177359   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.180846   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:14.180926   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:14.211699   59960 cri.go:89] found id: ""
	I1126 20:14:14.211731   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.211740   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:14.211747   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:14.211815   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:14.245320   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:14.245343   59960 cri.go:89] found id: ""
	I1126 20:14:14.245352   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:14.245422   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.249066   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:14.249133   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:14.277385   59960 cri.go:89] found id: ""
	I1126 20:14:14.277407   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.277415   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:14.277424   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:14.277436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:14.289839   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:14.289866   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:14.361142   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:14.352896   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.353542   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355081   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355655   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.357173   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:14.352896   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.353542   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355081   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355655   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.357173   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:14.361165   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:14.361179   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:14.419666   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:14.419762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:14.468633   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:14.468667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:14.557664   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:14.557696   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:14.583538   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:14.583567   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.612806   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:14.612834   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:14.638272   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:14.638300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:14.721230   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:14.721268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:14.755109   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:14.755142   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:17.358125   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:17.371898   59960 out.go:203] 
	W1126 20:14:17.375212   59960 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1126 20:14:17.375248   59960 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1126 20:14:17.375258   59960 out.go:285] * Related issues:
	* Related issues:
	W1126 20:14:17.375279   59960 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1126 20:14:17.375299   59960 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1126 20:14:17.378409   59960 out.go:203] 

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-arm64 -p ha-278127 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 105
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-278127
helpers_test.go:247: (dbg) docker inspect ha-278127:

-- stdout --
	[
	    {
	        "Id": "0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd",
	        "Created": "2025-11-26T19:57:51.94382214Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 60086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:06:25.13540784Z",
	            "FinishedAt": "2025-11-26T20:06:24.397214575Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/hosts",
	        "LogPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd-json.log",
	        "Name": "/ha-278127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-278127:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-278127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd",
	                "LowerDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-278127",
	                "Source": "/var/lib/docker/volumes/ha-278127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-278127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-278127",
	                "name.minikube.sigs.k8s.io": "ha-278127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb3aaf333e9f66a1f0a54705c2952cf94a31e67f170d0e073ad505006b4613f7",
	            "SandboxKey": "/var/run/docker/netns/cb3aaf333e9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-278127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:6e:15:9f:21:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "20cb65a83ad57cf8581cf982a5b25f381be527698b87a783139e32a436f750e9",
	                    "EndpointID": "217fa13f4a876f9a733e9c88a45d94a8aabe2f981d6e4c092ca2c647767455d3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-278127",
	                        "0081e5a17ed5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-278127 -n ha-278127
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 logs -n 25: (2.279597339s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-278127 cp ha-278127-m03:/home/docker/cp-test.txt ha-278127-m04:/home/docker/cp-test_ha-278127-m03_ha-278127-m04.txt               │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test_ha-278127-m03_ha-278127-m04.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp testdata/cp-test.txt ha-278127-m04:/home/docker/cp-test.txt                                                             │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837002730/001/cp-test_ha-278127-m04.txt │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127:/home/docker/cp-test_ha-278127-m04_ha-278127.txt                       │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127.txt                                                 │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127-m02:/home/docker/cp-test_ha-278127-m04_ha-278127-m02.txt               │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m02 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127-m02.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127-m03:/home/docker/cp-test_ha-278127-m04_ha-278127-m03.txt               │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m03 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127-m03.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ node    │ ha-278127 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ node    │ ha-278127 node start m02 --alsologtostderr -v 5                                                                                      │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:03 UTC │
	│ node    │ ha-278127 node list --alsologtostderr -v 5                                                                                           │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:03 UTC │                     │
	│ stop    │ ha-278127 stop --alsologtostderr -v 5                                                                                                │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:03 UTC │ 26 Nov 25 20:04 UTC │
	│ start   │ ha-278127 start --wait true --alsologtostderr -v 5                                                                                   │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:04 UTC │ 26 Nov 25 20:05 UTC │
	│ node    │ ha-278127 node list --alsologtostderr -v 5                                                                                           │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │                     │
	│ node    │ ha-278127 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │ 26 Nov 25 20:05 UTC │
	│ stop    │ ha-278127 stop --alsologtostderr -v 5                                                                                                │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │ 26 Nov 25 20:06 UTC │
	│ start   │ ha-278127 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:06:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:06:24.854734   59960 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:06:24.854900   59960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.854911   59960 out.go:374] Setting ErrFile to fd 2...
	I1126 20:06:24.854917   59960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.855178   59960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:06:24.855529   59960 out.go:368] Setting JSON to false
	I1126 20:06:24.856339   59960 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2915,"bootTime":1764184670,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:06:24.856415   59960 start.go:143] virtualization:  
	I1126 20:06:24.859567   59960 out.go:179] * [ha-278127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:06:24.863328   59960 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:06:24.863432   59960 notify.go:221] Checking for updates...
	I1126 20:06:24.869239   59960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:06:24.872146   59960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:24.874915   59960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:06:24.877742   59960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:06:24.880612   59960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:06:24.883943   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:24.884479   59960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:06:24.917824   59960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:06:24.917967   59960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:06:24.982581   59960 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-26 20:06:24.973603153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:06:24.982686   59960 docker.go:319] overlay module found
	I1126 20:06:24.986072   59960 out.go:179] * Using the docker driver based on existing profile
	I1126 20:06:24.989065   59960 start.go:309] selected driver: docker
	I1126 20:06:24.989102   59960 start.go:927] validating driver "docker" against &{Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:24.989232   59960 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:06:24.989341   59960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:06:25.048426   59960 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-26 20:06:25.038525674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:06:25.048890   59960 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:06:25.048924   59960 cni.go:84] Creating CNI manager for ""
	I1126 20:06:25.048991   59960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1126 20:06:25.049039   59960 start.go:353] cluster config:
	{Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:25.052236   59960 out.go:179] * Starting "ha-278127" primary control-plane node in "ha-278127" cluster
	I1126 20:06:25.055057   59960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:06:25.058039   59960 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:06:25.061008   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:25.061089   59960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:06:25.061106   59960 cache.go:65] Caching tarball of preloaded images
	I1126 20:06:25.061005   59960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:06:25.061198   59960 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:06:25.061210   59960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:06:25.061353   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:25.080808   59960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:06:25.080831   59960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:06:25.080846   59960 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:06:25.080876   59960 start.go:360] acquireMachinesLock for ha-278127: {Name:mkb106a4eb425a1b9d0e59976741b3f940666d17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:06:25.080933   59960 start.go:364] duration metric: took 35.659µs to acquireMachinesLock for "ha-278127"
	I1126 20:06:25.080951   59960 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:06:25.080956   59960 fix.go:54] fixHost starting: 
	I1126 20:06:25.081217   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:06:25.097737   59960 fix.go:112] recreateIfNeeded on ha-278127: state=Stopped err=<nil>
	W1126 20:06:25.097772   59960 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:06:25.101061   59960 out.go:252] * Restarting existing docker container for "ha-278127" ...
	I1126 20:06:25.101155   59960 cli_runner.go:164] Run: docker start ha-278127
	I1126 20:06:25.385420   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:06:25.411970   59960 kic.go:430] container "ha-278127" state is running.
	I1126 20:06:25.412392   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:25.431941   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:25.432192   59960 machine.go:94] provisionDockerMachine start ...
	I1126 20:06:25.432251   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:25.452939   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:25.453252   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:25.453261   59960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:06:25.454097   59960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44664->127.0.0.1:32828: read: connection reset by peer
	I1126 20:06:28.605461   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127
	
	I1126 20:06:28.605490   59960 ubuntu.go:182] provisioning hostname "ha-278127"
	I1126 20:06:28.605558   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:28.623455   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:28.623769   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:28.623786   59960 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-278127 && echo "ha-278127" | sudo tee /etc/hostname
	I1126 20:06:28.778155   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127
	
	I1126 20:06:28.778256   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:28.794949   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:28.795250   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:28.795271   59960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-278127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-278127/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-278127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:06:28.942212   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:06:28.942238   59960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:06:28.942272   59960 ubuntu.go:190] setting up certificates
	I1126 20:06:28.942281   59960 provision.go:84] configureAuth start
	I1126 20:06:28.942355   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:28.960559   59960 provision.go:143] copyHostCerts
	I1126 20:06:28.960617   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:28.960653   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:06:28.960666   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:28.960744   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:06:28.960844   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:28.960866   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:06:28.960877   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:28.960906   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:06:28.960964   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:28.960985   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:06:28.960993   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:28.961023   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:06:28.961088   59960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.ha-278127 san=[127.0.0.1 192.168.49.2 ha-278127 localhost minikube]
	I1126 20:06:29.153972   59960 provision.go:177] copyRemoteCerts
	I1126 20:06:29.154049   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:06:29.154092   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.171236   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.273352   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1126 20:06:29.273420   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:06:29.290237   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1126 20:06:29.290299   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1126 20:06:29.307794   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1126 20:06:29.307855   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:06:29.325356   59960 provision.go:87] duration metric: took 383.045342ms to configureAuth
	I1126 20:06:29.325387   59960 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:06:29.325626   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:29.325742   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.342790   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:29.343103   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:29.343131   59960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:06:29.721722   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:06:29.721744   59960 machine.go:97] duration metric: took 4.28954331s to provisionDockerMachine
	I1126 20:06:29.721770   59960 start.go:293] postStartSetup for "ha-278127" (driver="docker")
	I1126 20:06:29.721791   59960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:06:29.721855   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:06:29.721907   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.742288   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.845365   59960 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:06:29.848307   59960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:06:29.848344   59960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:06:29.848355   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:06:29.848405   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:06:29.848509   59960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:06:29.848521   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /etc/ssl/certs/41292.pem
	I1126 20:06:29.848614   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:06:29.855777   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:29.872505   59960 start.go:296] duration metric: took 150.71913ms for postStartSetup
	I1126 20:06:29.872582   59960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:06:29.872629   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.889019   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.990934   59960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:06:29.995268   59960 fix.go:56] duration metric: took 4.914304894s for fixHost
	I1126 20:06:29.995338   59960 start.go:83] releasing machines lock for "ha-278127", held for 4.914396494s
	I1126 20:06:29.995443   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:30.012377   59960 ssh_runner.go:195] Run: cat /version.json
	I1126 20:06:30.012396   59960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:06:30.012433   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:30.012448   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:30.031079   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:30.032530   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:30.145909   59960 ssh_runner.go:195] Run: systemctl --version
	I1126 20:06:30.239511   59960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:06:30.276317   59960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:06:30.280821   59960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:06:30.280919   59960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:06:30.288826   59960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:06:30.288852   59960 start.go:496] detecting cgroup driver to use...
	I1126 20:06:30.288908   59960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:06:30.288973   59960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:06:30.304277   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:06:30.316900   59960 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:06:30.316968   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:06:30.332722   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:06:30.345857   59960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:06:30.458910   59960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:06:30.568914   59960 docker.go:234] disabling docker service ...
	I1126 20:06:30.568992   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:06:30.584111   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:06:30.596826   59960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:06:30.712581   59960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:06:30.831709   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:06:30.843921   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:06:30.857895   59960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:06:30.858007   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.867693   59960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:06:30.867809   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.876639   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.885174   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.893801   59960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:06:30.901606   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.910405   59960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.918408   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.927292   59960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:06:30.934726   59960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:06:30.941996   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:31.058637   59960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:06:31.242820   59960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:06:31.242889   59960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:06:31.246945   59960 start.go:564] Will wait 60s for crictl version
	I1126 20:06:31.247023   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:06:31.250523   59960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:06:31.274233   59960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:06:31.274317   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:06:31.302783   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:06:31.335292   59960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:06:31.338152   59960 cli_runner.go:164] Run: docker network inspect ha-278127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:06:31.354467   59960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 20:06:31.358251   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:06:31.368693   59960 kubeadm.go:884] updating cluster {Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:06:31.368839   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:31.368891   59960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:06:31.403727   59960 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:06:31.403752   59960 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:06:31.404010   59960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:06:31.431423   59960 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:06:31.431446   59960 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:06:31.431457   59960 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1126 20:06:31.431560   59960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-278127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:06:31.431642   59960 ssh_runner.go:195] Run: crio config
	I1126 20:06:31.500147   59960 cni.go:84] Creating CNI manager for ""
	I1126 20:06:31.500186   59960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1126 20:06:31.500211   59960 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:06:31.500236   59960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-278127 NodeName:ha-278127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:06:31.500354   59960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-278127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:06:31.500372   59960 kube-vip.go:115] generating kube-vip config ...
	I1126 20:06:31.500428   59960 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1126 20:06:31.512046   59960 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:06:31.512210   59960 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1126 20:06:31.512299   59960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:06:31.519877   59960 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:06:31.519973   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1126 20:06:31.527497   59960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1126 20:06:31.540828   59960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:06:31.553623   59960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1126 20:06:31.566105   59960 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1126 20:06:31.578838   59960 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1126 20:06:31.582461   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:06:31.592186   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:31.707439   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:06:31.722268   59960 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127 for IP: 192.168.49.2
	I1126 20:06:31.722291   59960 certs.go:195] generating shared ca certs ...
	I1126 20:06:31.722307   59960 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:31.722445   59960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:06:31.722497   59960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:06:31.722508   59960 certs.go:257] generating profile certs ...
	I1126 20:06:31.722593   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key
	I1126 20:06:31.722624   59960 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab
	I1126 20:06:31.722643   59960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1126 20:06:32.010576   59960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab ...
	I1126 20:06:32.010610   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab: {Name:mk952cf244227c47330a0f303648b46942398499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.010819   59960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab ...
	I1126 20:06:32.010835   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab: {Name:mk44577b028f8c1bee471863ff089cc458df619d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.010930   59960 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt
	I1126 20:06:32.011078   59960 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key
	I1126 20:06:32.011225   59960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key
	I1126 20:06:32.011244   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1126 20:06:32.011263   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1126 20:06:32.011280   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1126 20:06:32.011297   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1126 20:06:32.011315   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1126 20:06:32.011331   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1126 20:06:32.011348   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1126 20:06:32.011362   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1126 20:06:32.011414   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:06:32.011456   59960 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:06:32.011469   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:06:32.011501   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:06:32.011530   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:06:32.011558   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:06:32.011608   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:32.011640   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.011656   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.011666   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem -> /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.012331   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:06:32.032881   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:06:32.054562   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:06:32.072828   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:06:32.091195   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:06:32.109160   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:06:32.126721   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:06:32.143729   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:06:32.162210   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:06:32.179022   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:06:32.196402   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:06:32.213770   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:06:32.227414   59960 ssh_runner.go:195] Run: openssl version
	I1126 20:06:32.233654   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:06:32.243718   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.247376   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.247448   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.289532   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:06:32.297668   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:06:32.306080   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.309793   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.309880   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.353652   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:06:32.364544   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:06:32.373430   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.381651   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.381803   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.434961   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:06:32.448704   59960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:06:32.454552   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:06:32.518905   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:06:32.599420   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:06:32.673604   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:06:32.734602   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:06:32.794948   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:06:32.842245   59960 kubeadm.go:401] StartCluster: {Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:32.842417   59960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:06:32.842512   59960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:06:32.887488   59960 cri.go:89] found id: "f5647f1652cc11a195a49a98906391e791c3136916a5e3c249907585088fad42"
	I1126 20:06:32.887548   59960 cri.go:89] found id: "1ed2c42e7047cc402ab04fdadafa16acc5208b12eede0475826c97d34c9a071f"
	I1126 20:06:32.887577   59960 cri.go:89] found id: "040a8549001808f2d3fce3d4cf9f8dff272706173960c5e8004af8b1ea042e80"
	I1126 20:06:32.887595   59960 cri.go:89] found id: "106da3c0ad4fa03ae491f571375cda1a123fe52e6f7ef39170a84c273267c713"
	I1126 20:06:32.887614   59960 cri.go:89] found id: "cdc1651fea8f10bd665928dcc7bb174b74385eb06e911da9629df17c0d9d29e8"
	I1126 20:06:32.887650   59960 cri.go:89] found id: ""
	I1126 20:06:32.887728   59960 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:06:32.910884   59960 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:06:32Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:06:32.911021   59960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:06:32.933474   59960 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:06:32.933554   59960 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:06:32.933631   59960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:06:32.956246   59960 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:06:32.956760   59960 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-278127" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:32.956919   59960 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-2326/kubeconfig needs updating (will repair): [kubeconfig missing "ha-278127" cluster setting kubeconfig missing "ha-278127" context setting]
	I1126 20:06:32.957299   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.957946   59960 kapi.go:59] client config for ha-278127: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:06:32.958772   59960 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1126 20:06:32.958857   59960 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1126 20:06:32.958878   59960 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1126 20:06:32.958921   59960 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1126 20:06:32.958940   59960 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1126 20:06:32.958837   59960 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1126 20:06:32.959354   59960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:06:32.974056   59960 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1126 20:06:32.974125   59960 kubeadm.go:602] duration metric: took 40.551528ms to restartPrimaryControlPlane
	I1126 20:06:32.974150   59960 kubeadm.go:403] duration metric: took 131.91251ms to StartCluster
	I1126 20:06:32.974180   59960 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.974282   59960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:32.974978   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.975243   59960 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:06:32.975297   59960 start.go:242] waiting for startup goroutines ...
	I1126 20:06:32.975325   59960 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:06:32.975918   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:32.981231   59960 out.go:179] * Enabled addons: 
	I1126 20:06:32.984100   59960 addons.go:530] duration metric: took 8.777007ms for enable addons: enabled=[]
	I1126 20:06:32.984180   59960 start.go:247] waiting for cluster config update ...
	I1126 20:06:32.984203   59960 start.go:256] writing updated cluster config ...
	I1126 20:06:32.987492   59960 out.go:203] 
	I1126 20:06:32.990613   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:32.990800   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:32.994017   59960 out.go:179] * Starting "ha-278127-m02" control-plane node in "ha-278127" cluster
	I1126 20:06:32.996802   59960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:06:32.999792   59960 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:06:33.002700   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:33.002740   59960 cache.go:65] Caching tarball of preloaded images
	I1126 20:06:33.002860   59960 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:06:33.002893   59960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:06:33.003031   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:33.003254   59960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:06:33.039303   59960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:06:33.039323   59960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:06:33.039336   59960 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:06:33.039360   59960 start.go:360] acquireMachinesLock for ha-278127-m02: {Name:mkfa715e07e067116cf6c4854164186af5a39436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:06:33.039417   59960 start.go:364] duration metric: took 41.518µs to acquireMachinesLock for "ha-278127-m02"
	I1126 20:06:33.039439   59960 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:06:33.039445   59960 fix.go:54] fixHost starting: m02
	I1126 20:06:33.039721   59960 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:06:33.071417   59960 fix.go:112] recreateIfNeeded on ha-278127-m02: state=Stopped err=<nil>
	W1126 20:06:33.071449   59960 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:06:33.074580   59960 out.go:252] * Restarting existing docker container for "ha-278127-m02" ...
	I1126 20:06:33.074664   59960 cli_runner.go:164] Run: docker start ha-278127-m02
	I1126 20:06:33.452368   59960 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:06:33.483474   59960 kic.go:430] container "ha-278127-m02" state is running.
	I1126 20:06:33.483869   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:33.512602   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:33.512851   59960 machine.go:94] provisionDockerMachine start ...
	I1126 20:06:33.512917   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:33.539611   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:33.539907   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:33.539915   59960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:06:33.540557   59960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35216->127.0.0.1:32833: read: connection reset by peer
	I1126 20:06:36.755151   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127-m02
	
	I1126 20:06:36.755173   59960 ubuntu.go:182] provisioning hostname "ha-278127-m02"
	I1126 20:06:36.755238   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:36.783610   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:36.783923   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:36.783950   59960 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-278127-m02 && echo "ha-278127-m02" | sudo tee /etc/hostname
	I1126 20:06:37.026368   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127-m02
	
	I1126 20:06:37.026488   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:37.056257   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:37.056574   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:37.056592   59960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-278127-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-278127-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-278127-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:06:37.278605   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:06:37.278692   59960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:06:37.278724   59960 ubuntu.go:190] setting up certificates
	I1126 20:06:37.278764   59960 provision.go:84] configureAuth start
	I1126 20:06:37.278849   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:37.306165   59960 provision.go:143] copyHostCerts
	I1126 20:06:37.306207   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:37.306246   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:06:37.306253   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:37.306332   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:06:37.306421   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:37.306441   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:06:37.306445   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:37.306474   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:06:37.306512   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:37.306528   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:06:37.306532   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:37.306553   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:06:37.306602   59960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.ha-278127-m02 san=[127.0.0.1 192.168.49.3 ha-278127-m02 localhost minikube]
	I1126 20:06:37.781886   59960 provision.go:177] copyRemoteCerts
	I1126 20:06:37.782050   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:06:37.782113   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:37.799978   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:37.920744   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1126 20:06:37.920800   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:06:37.946353   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1126 20:06:37.946424   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 20:06:37.990628   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1126 20:06:37.990734   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:06:38.022932   59960 provision.go:87] duration metric: took 744.14174ms to configureAuth
	I1126 20:06:38.022999   59960 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:06:38.023281   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:38.023419   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:38.055902   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:38.056219   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:38.056232   59960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:06:39.163004   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:06:39.163066   59960 machine.go:97] duration metric: took 5.650194842s to provisionDockerMachine
	I1126 20:06:39.163087   59960 start.go:293] postStartSetup for "ha-278127-m02" (driver="docker")
	I1126 20:06:39.163098   59960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:06:39.163204   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:06:39.163258   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.194111   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.327619   59960 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:06:39.331483   59960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:06:39.331507   59960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:06:39.331518   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:06:39.331574   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:06:39.331649   59960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:06:39.331655   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /etc/ssl/certs/41292.pem
	I1126 20:06:39.331756   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:06:39.344886   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:39.377797   59960 start.go:296] duration metric: took 214.695598ms for postStartSetup
	I1126 20:06:39.377880   59960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:06:39.377991   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.402878   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.525023   59960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:06:39.531527   59960 fix.go:56] duration metric: took 6.492076268s for fixHost
	I1126 20:06:39.531551   59960 start.go:83] releasing machines lock for "ha-278127-m02", held for 6.492125467s
	I1126 20:06:39.531622   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:39.571062   59960 out.go:179] * Found network options:
	I1126 20:06:39.574101   59960 out.go:179]   - NO_PROXY=192.168.49.2
	W1126 20:06:39.577135   59960 proxy.go:120] fail to check proxy env: Error ip not in block
	W1126 20:06:39.577189   59960 proxy.go:120] fail to check proxy env: Error ip not in block
	I1126 20:06:39.577283   59960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:06:39.577298   59960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:06:39.577325   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.577353   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.610149   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.618182   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.847910   59960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:06:39.986067   59960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:06:39.986218   59960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:06:40.010567   59960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:06:40.010651   59960 start.go:496] detecting cgroup driver to use...
	I1126 20:06:40.010701   59960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:06:40.010777   59960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:06:40.066499   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:06:40.113187   59960 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:06:40.113357   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:06:40.138505   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:06:40.165558   59960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:06:40.434812   59960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:06:40.667360   59960 docker.go:234] disabling docker service ...
	I1126 20:06:40.667485   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:06:40.689020   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:06:40.712251   59960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:06:41.062262   59960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:06:41.446879   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:06:41.479018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:06:41.522736   59960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:06:41.522836   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.550554   59960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:06:41.550640   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.568877   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.605965   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.634535   59960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:06:41.647439   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.679616   59960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.700895   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.724575   59960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:06:41.743621   59960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:06:41.761053   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:42.179518   59960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:08:12.654700   59960 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.475140858s)
	I1126 20:08:12.654725   59960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:08:12.654777   59960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:08:12.658561   59960 start.go:564] Will wait 60s for crictl version
	I1126 20:08:12.658629   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:08:12.662122   59960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:08:12.694230   59960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:08:12.694320   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:08:12.723516   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:08:12.752895   59960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:08:12.755800   59960 out.go:179]   - env NO_PROXY=192.168.49.2
	I1126 20:08:12.758681   59960 cli_runner.go:164] Run: docker network inspect ha-278127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:08:12.774831   59960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 20:08:12.778729   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:08:12.788193   59960 mustload.go:66] Loading cluster: ha-278127
	I1126 20:08:12.788437   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:08:12.788732   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:08:12.805367   59960 host.go:66] Checking if "ha-278127" exists ...
	I1126 20:08:12.805673   59960 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127 for IP: 192.168.49.3
	I1126 20:08:12.805688   59960 certs.go:195] generating shared ca certs ...
	I1126 20:08:12.805703   59960 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:08:12.805829   59960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:08:12.805875   59960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:08:12.805885   59960 certs.go:257] generating profile certs ...
	I1126 20:08:12.806061   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key
	I1126 20:08:12.806134   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.28ad082f
	I1126 20:08:12.806177   59960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key
	I1126 20:08:12.806189   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1126 20:08:12.806203   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1126 20:08:12.806214   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1126 20:08:12.806227   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1126 20:08:12.806238   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1126 20:08:12.806249   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1126 20:08:12.806265   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1126 20:08:12.806276   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1126 20:08:12.806330   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:08:12.806364   59960 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:08:12.806376   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:08:12.806404   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:08:12.806431   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:08:12.806458   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:08:12.806505   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:08:12.806543   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:12.806557   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem -> /usr/share/ca-certificates/4129.pem
	I1126 20:08:12.806568   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /usr/share/ca-certificates/41292.pem
	I1126 20:08:12.806631   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:08:12.824408   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:08:12.926228   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1126 20:08:12.930801   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1126 20:08:12.939401   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1126 20:08:12.947934   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1126 20:08:12.960335   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1126 20:08:12.964526   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1126 20:08:12.973104   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1126 20:08:12.978204   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1126 20:08:12.987576   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1126 20:08:12.991901   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1126 20:08:13.001289   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1126 20:08:13.006200   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1126 20:08:13.014443   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:08:13.039341   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:08:13.063520   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:08:13.085219   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:08:13.103037   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:08:13.123095   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:08:13.140681   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:08:13.160781   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:08:13.180406   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:08:13.200475   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:08:13.221024   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:08:13.239900   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1126 20:08:13.254738   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1126 20:08:13.269631   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1126 20:08:13.285317   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1126 20:08:13.300359   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1126 20:08:13.320893   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1126 20:08:13.340300   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1126 20:08:13.361527   59960 ssh_runner.go:195] Run: openssl version
	I1126 20:08:13.368555   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:08:13.377244   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.381511   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.381624   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.427936   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:08:13.437023   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:08:13.445274   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.449571   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.449682   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.496315   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:08:13.504808   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:08:13.513181   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.517313   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.517396   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.579337   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:08:13.588179   59960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:08:13.593330   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:08:13.645107   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:08:13.691020   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:08:13.735436   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:08:13.780762   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:08:13.830095   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
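Editor's note: the `openssl x509 -checkend 86400` runs above ask whether each certificate is still valid 24 hours from now (exit 0 if so). A minimal Python sketch of that check, assuming `not_after` is the certificate's already-parsed notAfter timestamp:

```python
from datetime import datetime, timedelta, timezone

def checkend(not_after: datetime, seconds: int = 86400) -> bool:
    """Mirror `openssl x509 -checkend N`: True (openssl exit 0) when the
    certificate will still be valid N seconds from now."""
    return datetime.now(timezone.utc) + timedelta(seconds=seconds) < not_after

# A cert with a year left passes; one expiring within the hour fails.
print(checkend(datetime.now(timezone.utc) + timedelta(days=365)))   # prints True
print(checkend(datetime.now(timezone.utc) + timedelta(hours=1)))    # prints False
```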
	I1126 20:08:13.873290   59960 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1126 20:08:13.873415   59960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-278127-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:08:13.873445   59960 kube-vip.go:115] generating kube-vip config ...
	I1126 20:08:13.873508   59960 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1126 20:08:13.885513   59960 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:08:13.885577   59960 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1126 20:08:13.885657   59960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:08:13.893550   59960 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:08:13.893628   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1126 20:08:13.901912   59960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1126 20:08:13.916015   59960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:08:13.934936   59960 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1126 20:08:13.979363   59960 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1126 20:08:13.991396   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
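Editor's note: the bash one-liner above pins `control-plane.minikube.internal` in `/etc/hosts` by filtering out any line that already ends in that name, then appending the fresh mapping. The same filter-and-append, sketched in Python on an in-memory string (hypothetical helper, not minikube's code):

```python
def pin_host(hosts: str, ip: str,
             name: str = "control-plane.minikube.internal") -> str:
    """Drop any line ending in <tab><name> (the `grep -v` step), then
    append the new <ip><tab><name> mapping (the `echo` step)."""
    kept = [l for l in hosts.splitlines() if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.49.2\tcontrol-plane.minikube.internal\n"
print(pin_host(before, "192.168.49.254"))
```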
	I1126 20:08:14.018397   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:08:14.385132   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:08:14.402828   59960 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:08:14.403147   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:08:14.408967   59960 out.go:179] * Verifying Kubernetes components...
	I1126 20:08:14.411916   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:08:14.659853   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:08:14.678979   59960 kapi.go:59] client config for ha-278127: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1126 20:08:14.679061   59960 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1126 20:08:14.679322   59960 node_ready.go:35] waiting up to 6m0s for node "ha-278127-m02" to be "Ready" ...
	I1126 20:08:15.269402   59960 node_ready.go:49] node "ha-278127-m02" is "Ready"
	I1126 20:08:15.269438   59960 node_ready.go:38] duration metric: took 590.083677ms for node "ha-278127-m02" to be "Ready" ...
	I1126 20:08:15.269450   59960 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:08:15.269508   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:15.770378   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:16.271005   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:16.769624   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:17.269646   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:17.770292   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:18.270233   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:18.770225   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:19.269626   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:19.770251   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:20.270592   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:20.769691   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:21.269742   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:21.769575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:22.269640   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:22.770094   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:23.269745   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:23.770093   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:24.269839   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:24.770626   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:25.270510   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:25.770352   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:26.270238   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:26.770199   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:27.270553   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:27.770570   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:28.269631   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:28.770575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:29.269663   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:29.770438   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:30.269733   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:30.769570   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:31.269688   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:31.770556   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:32.270505   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:32.770152   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:33.269716   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:33.769765   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:34.269659   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:34.769641   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:35.269866   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:35.770030   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:36.270158   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:36.770014   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:37.270234   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:37.769610   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:38.270567   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:38.770558   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:39.269653   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:39.769895   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:40.270407   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:40.769781   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:41.270338   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:41.770411   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:42.269686   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:42.770028   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:43.269580   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:43.769636   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:44.269684   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:44.769627   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:45.272055   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:45.770418   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:46.269657   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:46.770575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:47.270036   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:47.770377   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:48.270502   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:48.770450   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:49.269719   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:49.770449   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:50.269903   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:50.769675   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:51.270539   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:51.770618   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:52.270336   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:52.770354   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:53.270340   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:53.769901   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:54.270054   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:54.769747   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:55.270283   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:55.770525   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:56.269881   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:56.769908   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:57.269834   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:57.769631   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:58.270414   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:58.770529   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:59.269820   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:59.770577   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:00.269749   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:00.770275   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:01.270165   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:01.769910   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:02.269673   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:02.770492   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:03.270339   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:03.769642   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:04.269668   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:04.770177   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:05.270062   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:05.770571   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:06.270286   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:06.770466   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:07.269878   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:07.770593   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:08.270292   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:08.770068   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:09.269767   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:09.769619   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:10.270146   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:10.769659   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:11.270311   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:11.770596   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:12.269893   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:12.769649   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:13.270341   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:13.770530   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:14.269596   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
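Editor's note: the minute of `pgrep` retries above is a fixed-interval poll (~500 ms) against a deadline, which here never succeeds and falls through to log gathering. The pattern can be sketched as (hypothetical helper, not minikube's actual wait code):

```python
import time

def wait_for(check, timeout: float, interval: float = 0.5) -> bool:
    """Poll check() every `interval` seconds until it returns True or
    `timeout` seconds elapse -- the shape of the apiserver wait above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# A probe that succeeds on its third attempt:
calls = {"n": 0}
def probe():
    calls["n"] += 1
    return calls["n"] >= 3

ok = wait_for(probe, timeout=5, interval=0.01)
print(ok)  # prints True
```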
	I1126 20:09:14.769532   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:14.769644   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:14.805181   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:14.805204   59960 cri.go:89] found id: ""
	I1126 20:09:14.805213   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:14.805269   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.809129   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:14.809206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:14.835451   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:14.835475   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:14.835480   59960 cri.go:89] found id: ""
	I1126 20:09:14.835487   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:14.835543   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.839249   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.842501   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:14.842574   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:14.867922   59960 cri.go:89] found id: ""
	I1126 20:09:14.867948   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.867957   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:14.867963   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:14.868022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:14.893599   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:14.893625   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:14.893630   59960 cri.go:89] found id: ""
	I1126 20:09:14.893638   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:14.893730   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.897540   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.901438   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:14.901540   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:14.929244   59960 cri.go:89] found id: ""
	I1126 20:09:14.929268   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.929277   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:14.929284   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:14.929340   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:14.956242   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:14.956264   59960 cri.go:89] found id: ""
	I1126 20:09:14.956272   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:14.956326   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.960197   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:14.960271   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:14.985332   59960 cri.go:89] found id: ""
	I1126 20:09:14.985407   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.985428   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:14.985455   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:14.985495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:15.015412   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:15.015491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:15.446082   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:15.438231    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.438877    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440458    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440891    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.442380    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:15.438231    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.438877    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440458    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440891    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.442380    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:15.446107   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:15.446122   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:15.474426   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:15.474452   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:15.514330   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:15.514364   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:15.582633   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:15.582662   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:15.636475   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:15.636508   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:15.718181   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:15.718215   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:15.814217   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:15.814253   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:15.826793   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:15.826823   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:15.854520   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:15.854550   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.382038   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:18.401602   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:18.401678   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:18.435808   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:18.435831   59960 cri.go:89] found id: ""
	I1126 20:09:18.435839   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:18.435907   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.439686   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:18.439801   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:18.476740   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:18.476764   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:18.476770   59960 cri.go:89] found id: ""
	I1126 20:09:18.476787   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:18.476889   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.480732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.484682   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:18.484783   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:18.511910   59960 cri.go:89] found id: ""
	I1126 20:09:18.511974   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.511989   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:18.511996   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:18.512055   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:18.547921   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:18.547988   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:18.548006   59960 cri.go:89] found id: ""
	I1126 20:09:18.548014   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:18.548071   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.552076   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.556982   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:18.557066   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:18.587286   59960 cri.go:89] found id: ""
	I1126 20:09:18.587313   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.587333   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:18.587340   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:18.587401   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:18.620541   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.620559   59960 cri.go:89] found id: ""
	I1126 20:09:18.620567   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:18.620626   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.624723   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:18.624796   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:18.653037   59960 cri.go:89] found id: ""
	I1126 20:09:18.653060   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.653068   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:18.653077   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:18.653090   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.684308   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:18.684335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:18.776764   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:18.776798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:18.865581   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:18.856655    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858014    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858939    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.859710    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.861248    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:18.856655    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858014    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858939    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.859710    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.861248    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:18.865603   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:18.865616   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:18.909234   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:18.909270   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:18.960436   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:18.960477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:18.990735   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:18.990766   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:19.069643   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:19.069722   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:19.104112   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:19.104137   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:19.118175   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:19.118204   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:19.148200   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:19.148229   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:21.687827   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:21.698536   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:21.698621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:21.730147   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:21.730171   59960 cri.go:89] found id: ""
	I1126 20:09:21.730180   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:21.730235   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.735922   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:21.736012   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:21.763452   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:21.763481   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:21.763486   59960 cri.go:89] found id: ""
	I1126 20:09:21.763494   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:21.763551   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.767451   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.771041   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:21.771140   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:21.803663   59960 cri.go:89] found id: ""
	I1126 20:09:21.803688   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.803697   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:21.803703   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:21.803767   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:21.832470   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:21.832496   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:21.832501   59960 cri.go:89] found id: ""
	I1126 20:09:21.832510   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:21.832567   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.836410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.840076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:21.840157   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:21.866968   59960 cri.go:89] found id: ""
	I1126 20:09:21.866994   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.867004   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:21.867011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:21.867093   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:21.892977   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:21.893000   59960 cri.go:89] found id: ""
	I1126 20:09:21.893008   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:21.893083   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.896906   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:21.897019   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:21.923720   59960 cri.go:89] found id: ""
	I1126 20:09:21.923744   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.923753   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:21.923762   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:21.923793   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:22.011751   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:22.003342    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.003880    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.005519    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.006189    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.007784    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:22.003342    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.003880    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.005519    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.006189    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.007784    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:22.011856   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:22.011890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:22.042091   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:22.042121   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:22.079857   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:22.079886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:22.179933   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:22.179973   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:22.207540   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:22.207568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:22.263434   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:22.263465   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:22.313145   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:22.313180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:22.365142   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:22.365177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:22.446886   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:22.446920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:22.483927   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:22.483961   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:24.996823   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:25.007913   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:25.007987   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:25.044777   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:25.044801   59960 cri.go:89] found id: ""
	I1126 20:09:25.044810   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:25.044870   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.048843   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:25.048923   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:25.083120   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:25.083187   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:25.083197   59960 cri.go:89] found id: ""
	I1126 20:09:25.083205   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:25.083271   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.086865   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.090526   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:25.090596   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:25.118710   59960 cri.go:89] found id: ""
	I1126 20:09:25.118735   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.118745   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:25.118752   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:25.118809   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:25.145818   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:25.145843   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:25.145850   59960 cri.go:89] found id: ""
	I1126 20:09:25.145857   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:25.145956   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.154268   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.159267   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:25.159348   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:25.185977   59960 cri.go:89] found id: ""
	I1126 20:09:25.186002   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.186011   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:25.186017   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:25.186072   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:25.213727   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:25.213751   59960 cri.go:89] found id: ""
	I1126 20:09:25.213760   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:25.213826   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.217850   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:25.217960   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:25.246743   59960 cri.go:89] found id: ""
	I1126 20:09:25.246769   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.246779   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:25.246788   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:25.246800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:25.321227   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:25.312798    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.313456    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315126    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315598    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.317138    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:25.312798    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.313456    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315126    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315598    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.317138    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:25.321251   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:25.321288   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:25.346983   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:25.347011   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:25.407991   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:25.408027   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:25.439857   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:25.439886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:25.467227   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:25.467252   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:25.549334   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:25.549371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:25.590791   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:25.590821   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:25.636096   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:25.636130   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:25.668287   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:25.668314   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:25.765804   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:25.765838   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:28.279160   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:28.290077   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:28.290149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:28.320697   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:28.320720   59960 cri.go:89] found id: ""
	I1126 20:09:28.320729   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:28.320786   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.324391   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:28.324466   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:28.351072   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:28.351094   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:28.351099   59960 cri.go:89] found id: ""
	I1126 20:09:28.351106   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:28.351161   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.355739   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.359260   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:28.359346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:28.386343   59960 cri.go:89] found id: ""
	I1126 20:09:28.386370   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.386383   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:28.386390   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:28.386457   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:28.413613   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:28.413635   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:28.413641   59960 cri.go:89] found id: ""
	I1126 20:09:28.413648   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:28.413701   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.417403   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.420731   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:28.420810   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:28.446127   59960 cri.go:89] found id: ""
	I1126 20:09:28.446202   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.446225   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:28.446245   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:28.446337   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:28.471432   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:28.471454   59960 cri.go:89] found id: ""
	I1126 20:09:28.471462   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:28.471545   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.475058   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:28.475141   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:28.502515   59960 cri.go:89] found id: ""
	I1126 20:09:28.502539   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.502549   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:28.502559   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:28.502570   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:28.514608   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:28.514637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:28.557861   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:28.557890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:28.627880   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:28.627917   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:28.659730   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:28.659757   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:28.725495   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:28.717349    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.718072    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.719611    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.720154    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.722097    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:28.717349    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.718072    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.719611    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.720154    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.722097    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:28.725519   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:28.725532   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:28.763157   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:28.763187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:28.828543   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:28.828573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:28.855674   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:28.855707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:28.888296   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:28.888323   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:28.966101   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:28.966135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:31.560965   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:31.571673   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:31.571744   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:31.601161   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:31.601182   59960 cri.go:89] found id: ""
	I1126 20:09:31.601190   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:31.601269   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.605397   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:31.605476   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:31.631813   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:31.631835   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:31.631841   59960 cri.go:89] found id: ""
	I1126 20:09:31.631848   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:31.631904   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.635710   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.639546   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:31.639621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:31.674540   59960 cri.go:89] found id: ""
	I1126 20:09:31.674569   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.674578   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:31.674585   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:31.674643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:31.705780   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:31.705799   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:31.705803   59960 cri.go:89] found id: ""
	I1126 20:09:31.705810   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:31.705865   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.709862   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.713500   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:31.713591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:31.739394   59960 cri.go:89] found id: ""
	I1126 20:09:31.739419   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.739429   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:31.739435   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:31.739492   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:31.765811   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:31.765834   59960 cri.go:89] found id: ""
	I1126 20:09:31.765842   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:31.765960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.769463   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:31.769554   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:31.802081   59960 cri.go:89] found id: ""
	I1126 20:09:31.802107   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.802116   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:31.802153   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:31.802172   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:31.849273   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:31.849308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:31.902662   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:31.902697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:31.990675   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:31.990710   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:32.022637   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:32.022667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:32.100797   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:32.092180    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.093036    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.094703    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.095415    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.097142    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:32.092180    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.093036    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.094703    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.095415    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.097142    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:32.100820   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:32.100833   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:32.146149   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:32.146184   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:32.172943   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:32.172970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:32.199037   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:32.199063   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:32.306507   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:32.306540   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:32.319193   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:32.319221   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:34.849302   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:34.860158   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:34.860250   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:34.887094   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:34.887113   59960 cri.go:89] found id: ""
	I1126 20:09:34.887121   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:34.887177   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.890890   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:34.890964   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:34.921149   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:34.921177   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:34.921182   59960 cri.go:89] found id: ""
	I1126 20:09:34.921189   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:34.921243   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.924938   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.928493   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:34.928569   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:34.954052   59960 cri.go:89] found id: ""
	I1126 20:09:34.954078   59960 logs.go:282] 0 containers: []
	W1126 20:09:34.954087   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:34.954093   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:34.954206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:34.985031   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:34.985054   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:34.985059   59960 cri.go:89] found id: ""
	I1126 20:09:34.985067   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:34.985121   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.989050   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.992852   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:34.992934   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:35.019287   59960 cri.go:89] found id: ""
	I1126 20:09:35.019314   59960 logs.go:282] 0 containers: []
	W1126 20:09:35.019323   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:35.019330   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:35.019393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:35.049190   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:35.049217   59960 cri.go:89] found id: ""
	I1126 20:09:35.049237   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:35.049313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:35.053627   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:35.053713   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:35.091326   59960 cri.go:89] found id: ""
	I1126 20:09:35.091394   59960 logs.go:282] 0 containers: []
	W1126 20:09:35.091420   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:35.091440   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:35.091476   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:35.188523   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:35.188560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:35.220725   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:35.220755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:35.250614   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:35.250643   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:35.289963   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:35.289995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:35.303153   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:35.303180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:35.375929   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:35.367382    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.368117    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.369869    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.370618    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.372228    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:35.367382    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.368117    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.369869    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.370618    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.372228    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:35.375952   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:35.375968   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:35.403037   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:35.403066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:35.445367   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:35.445402   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:35.491101   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:35.491135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:35.561489   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:35.561524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:38.150634   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:38.161275   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:38.161346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:38.189434   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:38.189461   59960 cri.go:89] found id: ""
	I1126 20:09:38.189469   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:38.189530   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.195206   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:38.195288   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:38.223137   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:38.223160   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:38.223166   59960 cri.go:89] found id: ""
	I1126 20:09:38.223173   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:38.223227   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.226977   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.230547   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:38.230624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:38.255698   59960 cri.go:89] found id: ""
	I1126 20:09:38.255723   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.255732   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:38.255742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:38.255800   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:38.285059   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:38.285082   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:38.285087   59960 cri.go:89] found id: ""
	I1126 20:09:38.285097   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:38.285151   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.288799   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.292713   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:38.292786   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:38.318862   59960 cri.go:89] found id: ""
	I1126 20:09:38.318889   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.318898   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:38.318905   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:38.318963   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:38.346973   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:38.346996   59960 cri.go:89] found id: ""
	I1126 20:09:38.347005   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:38.347057   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.350729   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:38.350856   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:38.378801   59960 cri.go:89] found id: ""
	I1126 20:09:38.378827   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.378836   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:38.378845   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:38.378915   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:38.390980   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:38.391009   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:38.422522   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:38.422550   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:38.469058   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:38.469133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:38.523109   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:38.523182   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:38.559691   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:38.559716   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:38.646468   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:38.646504   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:38.751509   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:38.751551   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:38.836492   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:38.827693    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.828759    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.829560    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.830636    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.831318    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:38.827693    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.828759    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.829560    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.830636    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.831318    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:38.836516   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:38.836528   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:38.876587   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:38.876623   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:38.910948   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:38.910987   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:41.443533   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:41.454798   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:41.454873   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:41.485670   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:41.485699   59960 cri.go:89] found id: ""
	I1126 20:09:41.485707   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:41.485761   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.489619   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:41.489690   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:41.525686   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:41.525710   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:41.525714   59960 cri.go:89] found id: ""
	I1126 20:09:41.525722   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:41.525777   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.536491   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.541670   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:41.541797   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:41.570295   59960 cri.go:89] found id: ""
	I1126 20:09:41.570319   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.570327   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:41.570334   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:41.570393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:41.598145   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:41.598169   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:41.598175   59960 cri.go:89] found id: ""
	I1126 20:09:41.598182   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:41.598258   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.602230   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.606445   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:41.606530   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:41.636614   59960 cri.go:89] found id: ""
	I1126 20:09:41.636637   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.636646   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:41.636652   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:41.636707   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:41.663292   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:41.663315   59960 cri.go:89] found id: ""
	I1126 20:09:41.663327   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:41.663382   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.667194   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:41.667277   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:41.696056   59960 cri.go:89] found id: ""
	I1126 20:09:41.696081   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.696090   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:41.696099   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:41.696110   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:41.794427   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:41.794463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:41.822463   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:41.822493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:41.871566   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:41.871599   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:41.916725   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:41.916759   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:41.950381   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:41.950410   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:41.982658   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:41.982692   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:41.996639   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:41.996672   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:42.087350   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:42.079184    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.079744    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081320    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081972    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.083647    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:42.079184    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.079744    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081320    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081972    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.083647    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:42.087369   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:42.087384   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:42.175919   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:42.176012   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:42.281379   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:42.281406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:44.882212   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:44.893873   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:44.893969   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:44.923663   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:44.923683   59960 cri.go:89] found id: ""
	I1126 20:09:44.923691   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:44.923744   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.927892   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:44.927959   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:44.958403   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:44.958423   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:44.958427   59960 cri.go:89] found id: ""
	I1126 20:09:44.958434   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:44.958486   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.962367   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.966913   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:44.966985   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:45.000482   59960 cri.go:89] found id: ""
	I1126 20:09:45.000503   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.000511   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:45.000517   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:45.000572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:45.031381   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:45.031401   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:45.031406   59960 cri.go:89] found id: ""
	I1126 20:09:45.031414   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:45.031471   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.036637   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.042551   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:45.042723   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:45.086906   59960 cri.go:89] found id: ""
	I1126 20:09:45.086987   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.087026   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:45.087050   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:45.087153   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:45.137504   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:45.137578   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:45.137598   59960 cri.go:89] found id: ""
	I1126 20:09:45.137621   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:45.137715   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.143678   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.149235   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:45.149438   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:45.196979   59960 cri.go:89] found id: ""
	I1126 20:09:45.197063   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.197089   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:45.197146   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:45.197191   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:45.267194   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:45.267280   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:45.386434   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:45.386524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:45.468233   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:45.459943    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.460742    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462336    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462624    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.464644    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:45.459943    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.460742    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462336    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462624    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.464644    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:45.468305   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:45.468342   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:45.541622   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:45.541649   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:45.613664   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:45.613695   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:45.641765   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:45.641794   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:45.702809   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:45.702837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:45.807019   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:45.807056   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:45.820258   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:45.820289   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:45.867345   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:45.867376   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:45.921560   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:45.921596   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:48.454091   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:48.464670   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:48.464755   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:48.493056   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:48.493081   59960 cri.go:89] found id: ""
	I1126 20:09:48.493089   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:48.493144   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.496943   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:48.497007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:48.524995   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:48.525020   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:48.525025   59960 cri.go:89] found id: ""
	I1126 20:09:48.525032   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:48.525085   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.528726   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.532247   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:48.532317   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:48.557862   59960 cri.go:89] found id: ""
	I1126 20:09:48.557887   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.557896   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:48.557902   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:48.557988   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:48.587744   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:48.587765   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:48.587770   59960 cri.go:89] found id: ""
	I1126 20:09:48.587777   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:48.587832   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.591388   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.594875   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:48.594985   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:48.627277   59960 cri.go:89] found id: ""
	I1126 20:09:48.627298   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.627313   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:48.627352   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:48.627433   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:48.664063   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:48.664088   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:48.664102   59960 cri.go:89] found id: ""
	I1126 20:09:48.664110   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:48.664222   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.668219   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.671608   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:48.671680   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:48.700294   59960 cri.go:89] found id: ""
	I1126 20:09:48.700322   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.700331   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:48.700340   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:48.700351   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:48.793887   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:48.793974   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:48.807445   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:48.807472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:48.881133   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:48.873596    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.874156    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.875737    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.876232    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.877299    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:48.873596    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.874156    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.875737    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.876232    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.877299    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:48.881155   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:48.881167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:48.926338   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:48.926370   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:48.980929   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:48.980964   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:49.008703   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:49.008729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:49.035020   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:49.035134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:49.075209   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:49.075239   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:49.102778   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:49.102808   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:49.148209   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:49.148243   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:49.175449   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:49.175477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:51.750461   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:51.761173   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:51.761247   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:51.792174   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:51.792200   59960 cri.go:89] found id: ""
	I1126 20:09:51.792207   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:51.792272   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.796194   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:51.796266   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:51.826309   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:51.826333   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:51.826339   59960 cri.go:89] found id: ""
	I1126 20:09:51.826346   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:51.826408   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.830049   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.833626   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:51.833703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:51.864668   59960 cri.go:89] found id: ""
	I1126 20:09:51.864693   59960 logs.go:282] 0 containers: []
	W1126 20:09:51.864702   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:51.864709   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:51.864770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:51.902154   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:51.902178   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:51.902184   59960 cri.go:89] found id: ""
	I1126 20:09:51.902191   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:51.902244   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.906099   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.909550   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:51.909622   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:51.940956   59960 cri.go:89] found id: ""
	I1126 20:09:51.940984   59960 logs.go:282] 0 containers: []
	W1126 20:09:51.940993   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:51.941000   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:51.941057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:51.967086   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:51.967112   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:51.967117   59960 cri.go:89] found id: ""
	I1126 20:09:51.967125   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:51.967206   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.970992   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.974344   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:51.974463   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:52.006654   59960 cri.go:89] found id: ""
	I1126 20:09:52.006675   59960 logs.go:282] 0 containers: []
	W1126 20:09:52.006684   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:52.006693   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:52.006705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:52.033587   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:52.033621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:52.062777   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:52.062810   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:52.136250   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:52.127112    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.127989    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.129548    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.130437    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.132317    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:52.127112    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.127989    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.129548    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.130437    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.132317    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:52.136279   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:52.136292   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:52.165716   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:52.165792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:52.210120   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:52.210157   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:52.266182   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:52.266228   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:52.296704   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:52.296732   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:52.373394   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:52.373432   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:52.409405   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:52.409436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:52.508717   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:52.508755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:52.520510   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:52.520577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.069988   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:55.081385   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:55.081477   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:55.109272   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:55.109297   59960 cri.go:89] found id: ""
	I1126 20:09:55.109306   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:55.109393   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.113332   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:55.113409   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:55.144644   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.144728   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:55.144749   59960 cri.go:89] found id: ""
	I1126 20:09:55.144782   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:55.144860   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.148962   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.153598   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:55.153724   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:55.180168   59960 cri.go:89] found id: ""
	I1126 20:09:55.180235   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.180274   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:55.180302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:55.180378   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:55.207578   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:55.207606   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:55.207611   59960 cri.go:89] found id: ""
	I1126 20:09:55.207621   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:55.207698   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.211665   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.215295   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:55.215371   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:55.243201   59960 cri.go:89] found id: ""
	I1126 20:09:55.243228   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.243237   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:55.243243   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:55.243299   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:55.273345   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:55.273370   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:55.273375   59960 cri.go:89] found id: ""
	I1126 20:09:55.273382   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:55.273434   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.277156   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.280557   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:55.280629   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:55.306973   59960 cri.go:89] found id: ""
	I1126 20:09:55.307037   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.307052   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:55.307061   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:55.307072   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:55.405440   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:55.405474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:55.418598   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:55.418628   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:55.487261   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:55.479261    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.479915    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481393    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481846    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.483618    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:55.479261    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.479915    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481393    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481846    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.483618    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:55.487286   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:55.487299   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:55.531555   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:55.531626   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:55.601020   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:55.601057   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:55.632319   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:55.632347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:55.660851   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:55.660881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:55.742963   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:55.742998   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:55.773047   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:55.773076   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.826960   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:55.826991   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:55.855917   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:55.855944   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:58.399772   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:58.415975   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:58.416043   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:58.442760   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:58.442782   59960 cri.go:89] found id: ""
	I1126 20:09:58.442792   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:58.442850   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.446527   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:58.446620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:58.476049   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:58.476071   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:58.476076   59960 cri.go:89] found id: ""
	I1126 20:09:58.476084   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:58.476141   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.480019   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.483716   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:58.483799   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:58.514116   59960 cri.go:89] found id: ""
	I1126 20:09:58.514138   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.514147   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:58.514153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:58.514220   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:58.547211   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:58.547233   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:58.547239   59960 cri.go:89] found id: ""
	I1126 20:09:58.547257   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:58.547342   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.551299   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.554848   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:58.554921   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:58.583768   59960 cri.go:89] found id: ""
	I1126 20:09:58.583793   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.583802   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:58.583809   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:58.583865   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:58.611601   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:58.611635   59960 cri.go:89] found id: ""
	I1126 20:09:58.611644   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:09:58.611703   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.615732   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:58.615802   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:58.646048   59960 cri.go:89] found id: ""
	I1126 20:09:58.646087   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.646096   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:58.646106   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:58.646135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:58.745296   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:58.745332   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:58.820265   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:58.811642    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.812262    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.813785    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.814448    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.815924    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:58.811642    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.812262    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.813785    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.814448    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.815924    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:58.820294   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:58.820308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:58.877523   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:58.877556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:58.904630   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:58.904656   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:58.980105   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:58.980138   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:58.992220   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:58.992248   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:59.019086   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:59.019112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:59.058229   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:59.058260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:59.106394   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:59.106427   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:59.134445   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:59.134474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:01.667677   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:01.679153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:01.679227   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:01.713101   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:01.713122   59960 cri.go:89] found id: ""
	I1126 20:10:01.713130   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:01.713185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.717042   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:01.717117   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:01.748792   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:01.748817   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:01.748823   59960 cri.go:89] found id: ""
	I1126 20:10:01.748832   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:01.748889   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.752752   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.756411   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:01.756487   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:01.785898   59960 cri.go:89] found id: ""
	I1126 20:10:01.785954   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.785964   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:01.785971   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:01.786033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:01.817470   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:01.817496   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:01.817502   59960 cri.go:89] found id: ""
	I1126 20:10:01.817509   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:01.817567   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.821688   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.826052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:01.826203   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:01.856542   59960 cri.go:89] found id: ""
	I1126 20:10:01.856568   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.856590   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:01.856620   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:01.856742   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:01.893138   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:01.893218   59960 cri.go:89] found id: ""
	I1126 20:10:01.893242   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:01.893337   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.897863   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:01.898026   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:01.935921   59960 cri.go:89] found id: ""
	I1126 20:10:01.935951   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.935961   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:01.935971   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:01.935985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:01.973303   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:01.973332   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:02.028454   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:02.028493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:02.074241   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:02.074272   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:02.162898   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:02.162936   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:02.176057   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:02.176088   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:02.235629   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:02.235665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:02.306607   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:02.306643   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:02.337699   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:02.337729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:02.374553   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:02.374582   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:02.481202   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:02.481238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:02.563313   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:02.555444    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.556211    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.557668    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.558242    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.559786    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:02.555444    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.556211    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.557668    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.558242    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.559786    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:05.064305   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:05.075852   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:05.075925   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:05.108322   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:05.108345   59960 cri.go:89] found id: ""
	I1126 20:10:05.108354   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:05.108410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.112382   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:05.112460   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:05.140946   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:05.141021   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:05.141040   59960 cri.go:89] found id: ""
	I1126 20:10:05.141063   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:05.141150   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.145278   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.148898   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:05.148974   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:05.176423   59960 cri.go:89] found id: ""
	I1126 20:10:05.176450   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.176459   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:05.176466   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:05.176527   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:05.204990   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:05.205013   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:05.205018   59960 cri.go:89] found id: ""
	I1126 20:10:05.205026   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:05.205088   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.208959   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.212627   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:05.212730   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:05.239581   59960 cri.go:89] found id: ""
	I1126 20:10:05.239604   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.239614   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:05.239620   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:05.239679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:05.268087   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:05.268110   59960 cri.go:89] found id: ""
	I1126 20:10:05.268119   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:05.268176   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.271819   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:05.271923   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:05.298753   59960 cri.go:89] found id: ""
	I1126 20:10:05.298819   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.298833   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:05.298843   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:05.298855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:05.325518   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:05.325548   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:05.376406   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:05.376438   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:05.428781   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:05.428943   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:05.459754   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:05.459786   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:05.487550   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:05.487581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:05.520035   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:05.520071   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:05.616425   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:05.616503   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:05.630189   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:05.630221   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:05.715272   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:05.705315    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.706188    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708012    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708749    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.710497    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:05.705315    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.706188    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708012    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708749    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.710497    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:05.715301   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:05.715315   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:05.768473   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:05.768507   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:08.349688   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:08.360619   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:08.360693   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:08.388583   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:08.388610   59960 cri.go:89] found id: ""
	I1126 20:10:08.388619   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:08.388678   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.392264   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:08.392334   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:08.418523   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:08.418549   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:08.418554   59960 cri.go:89] found id: ""
	I1126 20:10:08.418562   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:08.418621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.422368   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.425851   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:08.425954   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:08.456520   59960 cri.go:89] found id: ""
	I1126 20:10:08.456546   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.456555   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:08.456562   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:08.456620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:08.487158   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:08.487182   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:08.487186   59960 cri.go:89] found id: ""
	I1126 20:10:08.487195   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:08.487268   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.491193   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.494690   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:08.494760   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:08.523674   59960 cri.go:89] found id: ""
	I1126 20:10:08.523699   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.523708   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:08.523715   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:08.523773   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:08.569422   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:08.569442   59960 cri.go:89] found id: ""
	I1126 20:10:08.569449   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:08.569505   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.572997   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:08.573065   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:08.599736   59960 cri.go:89] found id: ""
	I1126 20:10:08.599763   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.599772   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:08.599781   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:08.599799   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:08.674461   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:08.665974    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.666705    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.668447    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.669108    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.670766    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:08.665974    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.666705    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.668447    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.669108    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.670766    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:08.674482   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:08.674495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:08.726546   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:08.726591   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:08.783639   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:08.783690   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:08.860709   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:08.860759   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:08.873030   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:08.873058   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:08.899170   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:08.899199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:08.940773   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:08.940855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:08.969671   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:08.969762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:09.001544   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:09.001621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:09.035799   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:09.035837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:11.634159   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:11.645145   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:11.645262   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:11.684091   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:11.684113   59960 cri.go:89] found id: ""
	I1126 20:10:11.684121   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:11.684198   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.687930   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:11.688002   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:11.716342   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:11.716366   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:11.716372   59960 cri.go:89] found id: ""
	I1126 20:10:11.716380   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:11.716438   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.720592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.724106   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:11.724181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:11.750971   59960 cri.go:89] found id: ""
	I1126 20:10:11.750997   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.751007   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:11.751014   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:11.751140   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:11.778888   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:11.778912   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:11.778917   59960 cri.go:89] found id: ""
	I1126 20:10:11.778924   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:11.778979   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.782704   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.786153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:11.786245   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:11.812859   59960 cri.go:89] found id: ""
	I1126 20:10:11.812924   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.812953   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:11.812972   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:11.813047   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:11.844995   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:11.845065   59960 cri.go:89] found id: ""
	I1126 20:10:11.845089   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:11.845159   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.848928   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:11.849056   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:11.878557   59960 cri.go:89] found id: ""
	I1126 20:10:11.878634   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.878657   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:11.878674   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:11.878686   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:11.911996   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:11.912024   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:11.957531   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:11.957700   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:12.002561   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:12.002600   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:12.037611   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:12.037655   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:12.124659   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:12.124695   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:12.157527   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:12.157559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:12.255561   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:12.255597   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:12.270701   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:12.270727   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:12.344084   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:12.335378    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.336132    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.337729    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.338527    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.340203    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:12.335378    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.336132    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.337729    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.338527    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.340203    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:12.344111   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:12.344127   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:12.414064   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:12.414099   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:14.957062   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:14.971279   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:14.971358   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:15.002850   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:15.002871   59960 cri.go:89] found id: ""
	I1126 20:10:15.002879   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:15.002953   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.007210   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:15.007317   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:15.044904   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:15.044929   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:15.044934   59960 cri.go:89] found id: ""
	I1126 20:10:15.044943   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:15.045037   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.050180   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.055192   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:15.055293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:15.087772   59960 cri.go:89] found id: ""
	I1126 20:10:15.087798   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.087815   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:15.087822   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:15.087883   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:15.117095   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:15.117114   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:15.117119   59960 cri.go:89] found id: ""
	I1126 20:10:15.117127   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:15.117185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.120995   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.124760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:15.124885   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:15.157854   59960 cri.go:89] found id: ""
	I1126 20:10:15.157954   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.157994   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:15.158017   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:15.158084   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:15.190383   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:15.190407   59960 cri.go:89] found id: ""
	I1126 20:10:15.190417   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:15.190474   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.194524   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:15.194624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:15.223311   59960 cri.go:89] found id: ""
	I1126 20:10:15.223337   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.223346   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:15.223355   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:15.223366   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:15.236105   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:15.236134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:15.263408   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:15.263436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:15.308099   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:15.308133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:15.370222   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:15.370258   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:15.412978   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:15.413009   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:15.482330   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:15.473679    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.474420    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476124    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476749    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.478398    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:15.473679    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.474420    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476124    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476749    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.478398    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:15.482403   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:15.482428   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:15.528305   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:15.528335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:15.564111   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:15.564138   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:15.592541   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:15.592569   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:15.673319   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:15.673357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:18.279646   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:18.290358   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:18.290427   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:18.319136   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:18.319159   59960 cri.go:89] found id: ""
	I1126 20:10:18.319168   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:18.319225   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.322893   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:18.322967   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:18.350092   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:18.350120   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:18.350126   59960 cri.go:89] found id: ""
	I1126 20:10:18.350139   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:18.350193   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.354777   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.358503   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:18.358602   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:18.396162   59960 cri.go:89] found id: ""
	I1126 20:10:18.396185   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.396193   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:18.396199   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:18.396262   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:18.430093   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:18.430119   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:18.430124   59960 cri.go:89] found id: ""
	I1126 20:10:18.430131   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:18.430196   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.434456   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.438374   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:18.438451   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:18.478030   59960 cri.go:89] found id: ""
	I1126 20:10:18.478058   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.478070   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:18.478076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:18.478137   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:18.506317   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:18.506340   59960 cri.go:89] found id: ""
	I1126 20:10:18.506349   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:18.506410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.510476   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:18.510552   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:18.550337   59960 cri.go:89] found id: ""
	I1126 20:10:18.550408   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.550436   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:18.550454   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:18.550487   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:18.621602   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:18.613602    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.614230    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.615899    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.616339    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.617881    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:18.613602    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.614230    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.615899    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.616339    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.617881    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:18.621625   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:18.621638   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:18.648795   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:18.648824   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:18.691314   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:18.691358   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:18.771327   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:18.771367   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:18.808287   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:18.808319   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:18.907011   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:18.907048   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:18.919575   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:18.919605   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:18.961664   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:18.961697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:19.020056   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:19.020092   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:19.050179   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:19.050206   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:21.599106   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:21.611209   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:21.611309   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:21.639207   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:21.639229   59960 cri.go:89] found id: ""
	I1126 20:10:21.639238   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:21.639296   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.643290   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:21.643365   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:21.675608   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:21.675633   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:21.675639   59960 cri.go:89] found id: ""
	I1126 20:10:21.675648   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:21.675702   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.679772   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.683385   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:21.683511   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:21.719004   59960 cri.go:89] found id: ""
	I1126 20:10:21.719078   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.719102   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:21.719123   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:21.719196   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:21.745555   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:21.745634   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:21.745660   59960 cri.go:89] found id: ""
	I1126 20:10:21.745681   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:21.745750   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.750313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.753830   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:21.753907   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:21.781119   59960 cri.go:89] found id: ""
	I1126 20:10:21.781199   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.781222   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:21.781243   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:21.781347   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:21.809894   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:21.810006   59960 cri.go:89] found id: ""
	I1126 20:10:21.810022   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:21.810092   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.813756   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:21.813853   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:21.840725   59960 cri.go:89] found id: ""
	I1126 20:10:21.840751   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.840760   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:21.840769   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:21.840781   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:21.854145   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:21.854177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:21.884873   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:21.884902   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:21.936427   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:21.936463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:21.990170   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:21.990205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:22.077016   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:22.077064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:22.106941   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:22.106974   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:22.136672   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:22.136703   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:22.235594   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:22.235630   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:22.305008   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:22.295860    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.296666    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.298548    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.299084    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.300765    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:22.295860    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.296666    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.298548    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.299084    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.300765    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:22.305032   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:22.305046   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:22.378673   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:22.378711   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:24.920612   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:24.931941   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:24.932015   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:24.958956   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:24.958979   59960 cri.go:89] found id: ""
	I1126 20:10:24.958988   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:24.959047   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.962853   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:24.962931   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:24.989108   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:24.989130   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:24.989134   59960 cri.go:89] found id: ""
	I1126 20:10:24.989141   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:24.989195   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.992756   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.996360   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:24.996431   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:25.023636   59960 cri.go:89] found id: ""
	I1126 20:10:25.023660   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.023670   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:25.023676   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:25.023751   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:25.056300   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:25.056325   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:25.056331   59960 cri.go:89] found id: ""
	I1126 20:10:25.056339   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:25.056407   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.060822   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.066693   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:25.066825   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:25.098171   59960 cri.go:89] found id: ""
	I1126 20:10:25.098239   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.098258   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:25.098265   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:25.098344   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:25.129634   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:25.129655   59960 cri.go:89] found id: ""
	I1126 20:10:25.129664   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:25.129759   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.134599   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:25.134715   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:25.166870   59960 cri.go:89] found id: ""
	I1126 20:10:25.166896   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.166905   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:25.166918   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:25.166931   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:25.201303   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:25.201335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:25.234106   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:25.234132   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:25.335293   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:25.335329   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:25.367895   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:25.367920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:25.408499   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:25.408540   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:25.489459   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:25.489496   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:25.525614   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:25.525642   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:25.540937   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:25.541079   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:25.619457   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:25.611129    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.611986    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.613567    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.614319    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.615842    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:25.611129    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.611986    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.613567    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.614319    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.615842    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:25.619480   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:25.619494   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:25.667380   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:25.667419   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.233076   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:28.244698   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:28.244770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:28.272507   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:28.272530   59960 cri.go:89] found id: ""
	I1126 20:10:28.272538   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:28.272596   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.276257   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:28.276333   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:28.303315   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:28.303337   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:28.303342   59960 cri.go:89] found id: ""
	I1126 20:10:28.303349   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:28.303429   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.307300   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.310655   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:28.310727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:28.337118   59960 cri.go:89] found id: ""
	I1126 20:10:28.337140   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.337150   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:28.337156   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:28.337214   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:28.364328   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.364352   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:28.364358   59960 cri.go:89] found id: ""
	I1126 20:10:28.364374   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:28.364436   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.368741   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.372299   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:28.372385   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:28.398315   59960 cri.go:89] found id: ""
	I1126 20:10:28.398342   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.398351   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:28.398357   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:28.398418   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:28.426255   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:28.426276   59960 cri.go:89] found id: ""
	I1126 20:10:28.426287   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:28.426342   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.429863   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:28.430017   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:28.456908   59960 cri.go:89] found id: ""
	I1126 20:10:28.456933   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.456942   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:28.456951   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:28.456962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:28.532783   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:28.532820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:28.637119   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:28.637160   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:28.711269   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:28.702783    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.703978    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.704633    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706176    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706692    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:28.702783    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.703978    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.704633    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706176    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706692    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:28.711288   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:28.711304   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:28.737855   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:28.737883   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:28.789442   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:28.789477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:28.820705   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:28.820738   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:28.855530   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:28.855560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:28.868297   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:28.868324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:28.913639   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:28.913673   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.973350   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:28.973386   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:31.500924   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:31.511869   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:31.511943   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:31.546414   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:31.546447   59960 cri.go:89] found id: ""
	I1126 20:10:31.546456   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:31.546559   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.550296   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:31.550368   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:31.577840   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:31.577859   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:31.577864   59960 cri.go:89] found id: ""
	I1126 20:10:31.577870   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:31.577967   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.581789   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.585352   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:31.585421   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:31.616396   59960 cri.go:89] found id: ""
	I1126 20:10:31.616419   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.616428   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:31.616435   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:31.616491   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:31.641907   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:31.641971   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:31.641977   59960 cri.go:89] found id: ""
	I1126 20:10:31.641984   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:31.642048   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.645886   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.649651   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:31.649732   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:31.682488   59960 cri.go:89] found id: ""
	I1126 20:10:31.682512   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.682521   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:31.682527   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:31.682597   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:31.713608   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:31.713632   59960 cri.go:89] found id: ""
	I1126 20:10:31.713641   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:31.713693   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.717274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:31.717349   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:31.750907   59960 cri.go:89] found id: ""
	I1126 20:10:31.750934   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.750948   59960 logs.go:284] No container was found matching "kindnet"
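The discovery pass above queries each control-plane component with `sudo crictl ps -a --quiet --name=<component>`, which prints one 64-hex container ID per line (or nothing), and then logs "N containers: [...]". A minimal sketch of counting that output the same way — the IDs below are copied from the log, and the parsing is an illustration, not minikube's actual code:

```shell
# Simulated `crictl ps -a --quiet` output: one 64-hex container ID per line.
# These two IDs are the etcd containers reported in the log above.
out="217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46
cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"

# Count well-formed IDs, mirroring the "2 containers: [...]" log line.
count=$(printf '%s\n' "$out" | grep -c '^[0-9a-f]\{64\}$')
echo "$count containers"   # → 2 containers
```

An empty result (as seen for coredns, kube-proxy, and kindnet above) would count 0 and produce the `No container was found matching "..."` warning path.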
	I1126 20:10:31.750957   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:31.750970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:31.822403   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:31.813458    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.814237    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.815876    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.816493    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.818239    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:31.813458    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.814237    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.815876    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.816493    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.818239    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
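Every `describe nodes` attempt in this run fails the same way: kubectl cannot reach the apiserver at `localhost:8443` (`connect: connection refused`), so the failure is connectivity, not kubectl itself. A rough way to check that condition directly — this is a diagnostic sketch, assuming bash's `/dev/tcp` redirection is available; the host and port come from the error above:

```shell
# Probe the apiserver port that kubectl tried, per the log's error lines.
# /dev/tcp/<host>/<port> is a bash feature; opening it fails if nothing listens.
if (exec 3<>/dev/tcp/localhost/8443) 2>/dev/null; then
  echo "apiserver port open"
else
  echo "connection refused or unreachable"   # the failure mode seen in this run
fi
```

If the probe fails, the repeated `crictl ps --name=kube-apiserver` / container-log gathering seen in this log is the expected next step: the apiserver container exists but is not serving.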
	I1126 20:10:31.822425   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:31.822440   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:31.849676   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:31.849705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:31.891923   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:31.891959   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:31.944564   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:31.944608   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:32.015493   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:32.015577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:32.047447   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:32.047480   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:32.127183   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:32.127225   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:32.229734   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:32.229767   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:32.243678   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:32.243719   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:32.271264   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:32.271291   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:34.809253   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:34.819692   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:34.819817   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:34.846220   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:34.846240   59960 cri.go:89] found id: ""
	I1126 20:10:34.846248   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:34.846302   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.849960   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:34.850035   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:34.875486   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:34.875510   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:34.875515   59960 cri.go:89] found id: ""
	I1126 20:10:34.875522   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:34.875591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.879655   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.883266   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:34.883341   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:34.910257   59960 cri.go:89] found id: ""
	I1126 20:10:34.910286   59960 logs.go:282] 0 containers: []
	W1126 20:10:34.910295   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:34.910302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:34.910359   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:34.936501   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:34.936526   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:34.936531   59960 cri.go:89] found id: ""
	I1126 20:10:34.936539   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:34.936602   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.940297   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.943886   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:34.943960   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:34.970440   59960 cri.go:89] found id: ""
	I1126 20:10:34.970467   59960 logs.go:282] 0 containers: []
	W1126 20:10:34.970476   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:34.970482   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:34.970540   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:34.996813   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:34.996833   59960 cri.go:89] found id: ""
	I1126 20:10:34.996842   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:34.996901   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:35.000962   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:35.001030   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:35.029207   59960 cri.go:89] found id: ""
	I1126 20:10:35.029229   59960 logs.go:282] 0 containers: []
	W1126 20:10:35.029237   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:35.029247   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:35.029259   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:35.089280   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:35.089316   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:35.137518   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:35.137557   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:35.198701   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:35.198741   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:35.226526   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:35.226560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:35.308302   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:35.308341   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:35.411713   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:35.411751   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:35.425089   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:35.425118   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:35.496500   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:35.487044    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.487890    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.489861    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.490651    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.492443    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:35.487044    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.487890    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.489861    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.490651    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.492443    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:35.496523   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:35.496538   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:35.521713   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:35.521740   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:35.552491   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:35.552520   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:38.092147   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:38.105386   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:38.105494   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:38.134115   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:38.134183   59960 cri.go:89] found id: ""
	I1126 20:10:38.134204   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:38.134297   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.138342   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:38.138463   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:38.165373   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:38.165448   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:38.165468   59960 cri.go:89] found id: ""
	I1126 20:10:38.165492   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:38.165591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.169464   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.173100   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:38.173220   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:38.201795   59960 cri.go:89] found id: ""
	I1126 20:10:38.201818   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.201826   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:38.201836   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:38.201895   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:38.234752   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:38.234776   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:38.234782   59960 cri.go:89] found id: ""
	I1126 20:10:38.234789   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:38.234845   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.239023   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.242779   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:38.242854   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:38.271155   59960 cri.go:89] found id: ""
	I1126 20:10:38.271184   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.271193   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:38.271200   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:38.271261   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:38.298657   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:38.298682   59960 cri.go:89] found id: ""
	I1126 20:10:38.298691   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:38.298766   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.302858   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:38.302929   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:38.330494   59960 cri.go:89] found id: ""
	I1126 20:10:38.330520   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.330529   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:38.330538   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:38.330570   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:38.356340   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:38.356374   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:38.401509   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:38.401542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:38.463681   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:38.463719   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:38.496848   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:38.496881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:38.524848   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:38.524875   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:38.607033   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:38.607098   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:38.709803   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:38.709840   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:38.722963   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:38.722995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:38.796592   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:38.787909    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.788704    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.790425    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.791012    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.792912    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:38.787909    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.788704    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.790425    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.791012    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.792912    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:38.796617   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:38.796635   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:38.836671   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:38.836707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:41.373598   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:41.384711   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:41.384792   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:41.414012   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:41.414038   59960 cri.go:89] found id: ""
	I1126 20:10:41.414047   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:41.414103   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.417961   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:41.418036   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:41.450051   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:41.450076   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:41.450082   59960 cri.go:89] found id: ""
	I1126 20:10:41.450089   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:41.450147   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.455240   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.459174   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:41.459275   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:41.487216   59960 cri.go:89] found id: ""
	I1126 20:10:41.487241   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.487250   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:41.487257   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:41.487340   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:41.515666   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:41.515739   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:41.515751   59960 cri.go:89] found id: ""
	I1126 20:10:41.515759   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:41.515817   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.519735   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.523565   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:41.523639   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:41.554213   59960 cri.go:89] found id: ""
	I1126 20:10:41.554240   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.554250   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:41.554256   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:41.554321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:41.584766   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:41.584790   59960 cri.go:89] found id: ""
	I1126 20:10:41.584799   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:41.584861   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.589437   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:41.589510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:41.616610   59960 cri.go:89] found id: ""
	I1126 20:10:41.616638   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.616648   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:41.616657   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:41.616669   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:41.696316   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:41.696352   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:41.765798   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:41.758434    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.758824    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760333    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760643    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.762180    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:41.758434    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.758824    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760333    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760643    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.762180    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:41.765870   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:41.765900   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:41.791490   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:41.791517   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:41.827993   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:41.828022   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:41.854480   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:41.854511   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:41.885603   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:41.885632   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:41.984936   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:41.984970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:41.997672   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:41.997701   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:42.039613   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:42.039668   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:42.100317   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:42.100359   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:44.745690   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:44.756208   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:44.756277   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:44.793586   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:44.793606   59960 cri.go:89] found id: ""
	I1126 20:10:44.793614   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:44.793666   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.797466   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:44.797561   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:44.823288   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:44.823313   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:44.823319   59960 cri.go:89] found id: ""
	I1126 20:10:44.823326   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:44.823383   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.828270   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.832190   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:44.832260   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:44.858643   59960 cri.go:89] found id: ""
	I1126 20:10:44.858694   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.858704   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:44.858711   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:44.858772   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:44.887625   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:44.887711   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:44.887722   59960 cri.go:89] found id: ""
	I1126 20:10:44.887730   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:44.887791   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.891593   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.895076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:44.895151   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:44.924994   59960 cri.go:89] found id: ""
	I1126 20:10:44.925060   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.925085   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:44.925104   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:44.925196   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:44.951783   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:44.951807   59960 cri.go:89] found id: ""
	I1126 20:10:44.951816   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:44.951874   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.955505   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:44.955620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:44.982789   59960 cri.go:89] found id: ""
	I1126 20:10:44.982814   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.982822   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:44.982831   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:44.982843   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:45.010557   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:45.010586   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:45.141549   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:45.141632   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:45.253485   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:45.253554   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:45.353619   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:45.353660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:45.408761   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:45.408795   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:45.443664   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:45.443692   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:45.470742   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:45.470773   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:45.504515   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:45.504544   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:45.608220   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:45.608254   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:45.620732   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:45.620761   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:45.707896   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:45.695026    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.696388    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.697297    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.699791    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.700340    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:45.695026    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.696388    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.697297    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.699791    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.700340    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:48.209609   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:48.220742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:48.220811   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:48.247863   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:48.247886   59960 cri.go:89] found id: ""
	I1126 20:10:48.247894   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:48.247949   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.251929   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:48.251997   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:48.280449   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:48.280470   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:48.280475   59960 cri.go:89] found id: ""
	I1126 20:10:48.280483   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:48.280537   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.284732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.288315   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:48.288405   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:48.316409   59960 cri.go:89] found id: ""
	I1126 20:10:48.316432   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.316440   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:48.316446   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:48.316506   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:48.349208   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:48.349271   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:48.349289   59960 cri.go:89] found id: ""
	I1126 20:10:48.349316   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:48.349408   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.354353   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.357751   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:48.357848   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:48.385059   59960 cri.go:89] found id: ""
	I1126 20:10:48.385081   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.385090   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:48.385107   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:48.385185   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:48.411304   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:48.411326   59960 cri.go:89] found id: ""
	I1126 20:10:48.411334   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:48.411405   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.415053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:48.415156   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:48.441024   59960 cri.go:89] found id: ""
	I1126 20:10:48.441046   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.441055   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:48.441063   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:48.441075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:48.469644   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:48.469672   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:48.510776   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:48.510859   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:48.592885   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:48.592917   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:48.620191   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:48.620216   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:48.715671   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:48.715746   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:48.730976   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:48.731004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:48.784446   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:48.784483   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:48.816189   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:48.816220   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:48.894569   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:48.894607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:48.934181   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:48.934214   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:49.000322   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:48.992247    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.992990    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994167    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994648    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.996101    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:48.992247    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.992990    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994167    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994648    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.996101    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:51.500568   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:51.512500   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:51.512570   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:51.550166   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:51.550188   59960 cri.go:89] found id: ""
	I1126 20:10:51.550196   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:51.550253   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.554115   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:51.554221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:51.580857   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:51.580880   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:51.580885   59960 cri.go:89] found id: ""
	I1126 20:10:51.580893   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:51.580949   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.584903   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.588661   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:51.588730   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:51.620121   59960 cri.go:89] found id: ""
	I1126 20:10:51.620147   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.620156   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:51.620163   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:51.620225   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:51.648043   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:51.648066   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:51.648071   59960 cri.go:89] found id: ""
	I1126 20:10:51.648079   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:51.648144   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.652146   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.656590   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:51.656658   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:51.684798   59960 cri.go:89] found id: ""
	I1126 20:10:51.684825   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.684835   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:51.684842   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:51.684900   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:51.712247   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:51.712270   59960 cri.go:89] found id: ""
	I1126 20:10:51.712279   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:51.712334   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.716105   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:51.716235   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:51.755296   59960 cri.go:89] found id: ""
	I1126 20:10:51.755373   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.755389   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:51.755400   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:51.755412   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:51.782840   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:51.782871   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:51.826403   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:51.826436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:51.894112   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:51.894148   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:51.920185   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:51.920212   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:51.993815   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:51.993856   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:52.030774   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:52.030804   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:52.112821   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:52.103396    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.104540    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.105295    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.106939    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.107489    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:52.103396    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.104540    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.105295    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.106939    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.107489    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:52.112847   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:52.112861   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:52.161738   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:52.161771   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:52.193340   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:52.193368   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:52.291814   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:52.291862   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:54.810104   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:54.820898   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:54.820971   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:54.849431   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:54.849454   59960 cri.go:89] found id: ""
	I1126 20:10:54.849462   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:54.849524   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.853394   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:54.853465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:54.879833   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:54.879855   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:54.879860   59960 cri.go:89] found id: ""
	I1126 20:10:54.879867   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:54.879926   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.883636   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.887200   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:54.887280   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:54.913349   59960 cri.go:89] found id: ""
	I1126 20:10:54.913374   59960 logs.go:282] 0 containers: []
	W1126 20:10:54.913382   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:54.913389   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:54.913446   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:54.941189   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:54.941215   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:54.941221   59960 cri.go:89] found id: ""
	I1126 20:10:54.941229   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:54.941285   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.945133   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.948594   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:54.948673   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:54.977649   59960 cri.go:89] found id: ""
	I1126 20:10:54.977677   59960 logs.go:282] 0 containers: []
	W1126 20:10:54.977687   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:54.977693   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:54.977768   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:55.008912   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:55.008938   59960 cri.go:89] found id: ""
	I1126 20:10:55.008948   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:55.009005   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:55.012659   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:55.012727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:55.056313   59960 cri.go:89] found id: ""
	I1126 20:10:55.056393   59960 logs.go:282] 0 containers: []
	W1126 20:10:55.056419   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:55.056449   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:55.056478   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:55.170137   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:55.170180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:55.194458   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:55.194489   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:55.279906   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:55.272019    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.272480    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274150    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274543    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.276078    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:55.272019    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.272480    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274150    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274543    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.276078    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:55.279931   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:55.279945   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:55.321902   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:55.321949   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:55.351446   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:55.351474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:55.426688   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:55.426723   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:55.463472   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:55.463501   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:55.510565   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:55.510598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:55.580501   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:55.580534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:55.614574   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:55.614602   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:58.162969   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:58.173910   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:58.174019   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:58.202329   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:58.202352   59960 cri.go:89] found id: ""
	I1126 20:10:58.202360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:58.202415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.206274   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:58.206347   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:58.233721   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:58.233741   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:58.233745   59960 cri.go:89] found id: ""
	I1126 20:10:58.233753   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:58.233811   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.237802   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.242346   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:58.242419   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:58.271013   59960 cri.go:89] found id: ""
	I1126 20:10:58.271038   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.271047   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:58.271053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:58.271109   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:58.298515   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:58.298538   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:58.298553   59960 cri.go:89] found id: ""
	I1126 20:10:58.298560   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:58.298617   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.302497   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.306172   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:58.306241   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:58.331672   59960 cri.go:89] found id: ""
	I1126 20:10:58.331698   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.331707   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:58.331714   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:58.331819   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:58.359197   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:58.359219   59960 cri.go:89] found id: ""
	I1126 20:10:58.359228   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:58.359307   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.363274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:58.363346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:58.403777   59960 cri.go:89] found id: ""
	I1126 20:10:58.403804   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.403814   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:58.403829   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:58.403890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:58.504667   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:58.504702   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:58.517722   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:58.517750   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:58.589740   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:58.581328    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.582205    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.583896    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.584218    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.585780    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:58.581328    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.582205    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.583896    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.584218    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.585780    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:58.589761   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:58.589774   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:58.617621   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:58.617648   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:58.660238   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:58.660281   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:58.709585   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:58.709624   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:58.783550   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:58.783586   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:58.820181   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:58.820219   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:58.848533   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:58.848564   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:58.921350   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:58.921390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:01.453687   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:01.467262   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:01.467365   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:01.498662   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:01.498715   59960 cri.go:89] found id: ""
	I1126 20:11:01.498724   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:01.498785   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.504322   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:01.504445   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:01.545072   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:01.545098   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:01.545105   59960 cri.go:89] found id: ""
	I1126 20:11:01.545113   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:01.545185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.548993   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.552685   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:01.552797   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:01.582855   59960 cri.go:89] found id: ""
	I1126 20:11:01.582881   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.582891   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:01.582897   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:01.582954   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:01.613527   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:01.613548   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:01.613553   59960 cri.go:89] found id: ""
	I1126 20:11:01.613560   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:01.613629   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.618859   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.623550   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:01.623624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:01.660116   59960 cri.go:89] found id: ""
	I1126 20:11:01.660140   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.660149   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:01.660159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:01.660221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:01.692418   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:01.692442   59960 cri.go:89] found id: ""
	I1126 20:11:01.692450   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:01.692509   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.696379   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:01.696453   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:01.729407   59960 cri.go:89] found id: ""
	I1126 20:11:01.729430   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.729439   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:01.729447   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:01.729463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:01.784458   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:01.784492   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:01.872850   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:01.872886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:01.903039   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:01.903068   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:01.942057   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:01.942084   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:02.024475   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:02.024514   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:02.128096   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:02.128133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:02.199528   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:02.191565    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.192150    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.193873    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.194411    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.195999    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:11:02.199554   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:02.199568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:02.226949   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:02.226985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:02.270517   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:02.270555   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:02.306879   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:02.306948   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:04.822921   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:04.834951   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:04.835018   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:04.862163   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:04.862219   59960 cri.go:89] found id: ""
	I1126 20:11:04.862244   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:04.862312   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.865957   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:04.866029   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:04.895638   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:04.895658   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:04.895663   59960 cri.go:89] found id: ""
	I1126 20:11:04.895669   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:04.895722   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.899645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.903838   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:04.903909   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:04.929326   59960 cri.go:89] found id: ""
	I1126 20:11:04.929389   59960 logs.go:282] 0 containers: []
	W1126 20:11:04.929422   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:04.929442   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:04.929522   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:04.956401   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:04.956472   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:04.956491   59960 cri.go:89] found id: ""
	I1126 20:11:04.956522   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:04.956593   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.960195   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.963812   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:04.963930   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:04.990366   59960 cri.go:89] found id: ""
	I1126 20:11:04.990387   59960 logs.go:282] 0 containers: []
	W1126 20:11:04.990395   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:04.990402   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:04.990468   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:05.019718   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:05.019752   59960 cri.go:89] found id: ""
	I1126 20:11:05.019762   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:05.019824   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:05.023681   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:05.023779   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:05.053886   59960 cri.go:89] found id: ""
	I1126 20:11:05.053915   59960 logs.go:282] 0 containers: []
	W1126 20:11:05.053953   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:05.053963   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:05.053994   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:05.152926   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:05.152963   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:05.165506   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:05.165534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:05.194915   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:05.194945   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:05.235104   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:05.235137   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:05.285215   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:05.285247   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:05.314134   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:05.314162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:05.341007   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:05.341034   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:05.418277   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:05.418313   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:05.491273   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:05.482790    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.483758    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.485510    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.486097    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.487714    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:11:05.491294   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:05.491308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:05.552151   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:05.552187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:08.086064   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:08.097504   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:08.097574   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:08.126757   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:08.126780   59960 cri.go:89] found id: ""
	I1126 20:11:08.126789   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:08.126851   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.131043   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:08.131119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:08.158212   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:08.158274   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:08.158289   59960 cri.go:89] found id: ""
	I1126 20:11:08.158297   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:08.158360   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.162104   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.166980   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:08.167053   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:08.193258   59960 cri.go:89] found id: ""
	I1126 20:11:08.193290   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.193300   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:08.193307   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:08.193374   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:08.219187   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:08.219210   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:08.219216   59960 cri.go:89] found id: ""
	I1126 20:11:08.219234   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:08.219313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.223489   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.227150   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:08.227228   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:08.255318   59960 cri.go:89] found id: ""
	I1126 20:11:08.255340   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.255348   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:08.255355   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:08.255411   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:08.282171   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:08.282194   59960 cri.go:89] found id: ""
	I1126 20:11:08.282202   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:08.282273   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.285788   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:08.285852   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:08.315430   59960 cri.go:89] found id: ""
	I1126 20:11:08.315505   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.315538   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:08.315560   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:08.315580   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:08.345199   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:08.345268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:08.441184   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:08.441220   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:08.511176   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:08.500509    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.501151    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504004    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504546    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.506870    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:11:08.511208   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:08.511222   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:08.543421   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:08.543450   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:08.604175   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:08.604207   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:08.632557   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:08.632623   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:08.663480   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:08.663506   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:08.675096   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:08.675127   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:08.713968   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:08.713998   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:08.759141   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:08.759176   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:11.351574   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:11.361875   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:11.361972   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:11.388446   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:11.388515   59960 cri.go:89] found id: ""
	I1126 20:11:11.388529   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:11.388594   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.392093   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:11.392176   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:11.421855   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:11.421875   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:11.421880   59960 cri.go:89] found id: ""
	I1126 20:11:11.421887   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:11.421974   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.425675   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.429670   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:11.429770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:11.455248   59960 cri.go:89] found id: ""
	I1126 20:11:11.455272   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.455280   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:11.455287   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:11.455349   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:11.481734   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:11.481755   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:11.481761   59960 cri.go:89] found id: ""
	I1126 20:11:11.481769   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:11.481841   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.485836   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.489303   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:11.489380   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:11.521985   59960 cri.go:89] found id: ""
	I1126 20:11:11.522011   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.522020   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:11.522036   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:11.522095   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:11.561668   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:11.561700   59960 cri.go:89] found id: ""
	I1126 20:11:11.561708   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:11.561772   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.565986   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:11.566063   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:11.594364   59960 cri.go:89] found id: ""
	I1126 20:11:11.594386   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.594395   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:11.594404   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:11.594440   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:11.639020   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:11.639057   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:11.709026   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:11.709063   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:11.739742   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:11.739771   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:11.806014   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:11.797164    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798194    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798970    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.800645    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.801154    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:11.797164    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798194    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798970    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.800645    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.801154    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:11.806036   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:11.806048   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:11.844958   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:11.844991   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:11.876607   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:11.876634   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:11.911651   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:11.911677   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:11.991136   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:11.991170   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:12.094606   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:12.094650   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:12.107579   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:12.107609   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:14.637133   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:14.648286   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:14.648355   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:14.678404   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:14.678427   59960 cri.go:89] found id: ""
	I1126 20:11:14.678435   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:14.678495   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.682257   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:14.682330   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:14.713744   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:14.713765   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:14.713770   59960 cri.go:89] found id: ""
	I1126 20:11:14.713777   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:14.713835   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.718000   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.721792   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:14.721916   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:14.753701   59960 cri.go:89] found id: ""
	I1126 20:11:14.753767   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.753793   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:14.753812   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:14.753951   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:14.782584   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:14.782609   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:14.782615   59960 cri.go:89] found id: ""
	I1126 20:11:14.782622   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:14.782679   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.786288   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.790091   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:14.790165   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:14.816545   59960 cri.go:89] found id: ""
	I1126 20:11:14.816570   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.816579   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:14.816586   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:14.816642   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:14.846080   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:14.846100   59960 cri.go:89] found id: ""
	I1126 20:11:14.846108   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:14.846166   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.849789   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:14.849880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:14.876460   59960 cri.go:89] found id: ""
	I1126 20:11:14.876491   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.876500   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:14.876508   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:14.876518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:14.951236   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:14.951274   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:14.983322   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:14.983350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:15.061107   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:15.051102    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.052170    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.053243    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.054378    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.056334    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:15.051102    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.052170    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.053243    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.054378    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.056334    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:15.061129   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:15.061144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:15.097557   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:15.097587   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:15.138293   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:15.138326   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:15.168503   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:15.168532   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:15.267115   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:15.267150   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:15.279584   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:15.279615   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:15.326150   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:15.326184   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:15.389193   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:15.389226   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:17.918406   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:17.929053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:17.929122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:17.953884   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:17.953945   59960 cri.go:89] found id: ""
	I1126 20:11:17.953954   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:17.954015   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.957395   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:17.957465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:17.983711   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:17.983731   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:17.983735   59960 cri.go:89] found id: ""
	I1126 20:11:17.983742   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:17.983795   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.987660   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.991154   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:17.991224   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:18.019969   59960 cri.go:89] found id: ""
	I1126 20:11:18.019998   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.020008   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:18.020015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:18.020073   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:18.061149   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:18.061172   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:18.061178   59960 cri.go:89] found id: ""
	I1126 20:11:18.061186   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:18.061246   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.065578   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.069815   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:18.069885   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:18.096457   59960 cri.go:89] found id: ""
	I1126 20:11:18.096479   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.096487   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:18.096494   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:18.096554   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:18.124303   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:18.124367   59960 cri.go:89] found id: ""
	I1126 20:11:18.124392   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:18.124471   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.130707   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:18.130839   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:18.156714   59960 cri.go:89] found id: ""
	I1126 20:11:18.156740   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.156750   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:18.156759   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:18.156773   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:18.233800   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:18.233837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:18.264943   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:18.264973   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:18.343435   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:18.335872    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.336444    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.337906    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.338530    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.339816    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:18.335872    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.336444    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.337906    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.338530    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.339816    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:18.343458   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:18.343470   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:18.372998   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:18.373026   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:18.416461   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:18.416495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:18.445233   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:18.445263   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:18.545748   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:18.545787   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:18.557806   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:18.557835   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:18.622509   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:18.622542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:18.707610   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:18.707689   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:21.236452   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:21.247662   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:21.247729   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:21.276004   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:21.276030   59960 cri.go:89] found id: ""
	I1126 20:11:21.276038   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:21.276125   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.279851   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:21.279945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:21.309267   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:21.309291   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:21.309297   59960 cri.go:89] found id: ""
	I1126 20:11:21.309304   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:21.309359   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.313384   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.317026   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:21.317099   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:21.347773   59960 cri.go:89] found id: ""
	I1126 20:11:21.347799   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.347807   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:21.347817   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:21.347901   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:21.389878   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:21.389898   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:21.389902   59960 cri.go:89] found id: ""
	I1126 20:11:21.389910   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:21.390028   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.396218   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.405704   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:21.405823   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:21.458505   59960 cri.go:89] found id: ""
	I1126 20:11:21.458573   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.458605   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:21.458635   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:21.458731   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:21.486896   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:21.486961   59960 cri.go:89] found id: ""
	I1126 20:11:21.486983   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:21.487052   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.490729   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:21.490845   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:21.521776   59960 cri.go:89] found id: ""
	I1126 20:11:21.521798   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.521806   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:21.521815   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:21.521827   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:21.540126   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:21.540201   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:21.612034   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:21.604355    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.605075    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.606757    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.607410    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.608381    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:21.604355    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.605075    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.606757    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.607410    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.608381    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:21.612058   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:21.612072   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:21.658622   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:21.658657   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:21.707807   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:21.707844   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:21.769271   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:21.769306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:21.801295   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:21.801325   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:21.896605   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:21.896639   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:21.929176   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:21.929205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:21.967857   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:21.967884   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:22.001350   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:22.001375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:24.595423   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:24.606910   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:24.606980   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:24.638795   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:24.638819   59960 cri.go:89] found id: ""
	I1126 20:11:24.638827   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:24.638885   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.642601   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:24.642677   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:24.709965   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:24.709984   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:24.709989   59960 cri.go:89] found id: ""
	I1126 20:11:24.709996   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:24.710075   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.714848   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.719509   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:24.719668   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:24.756426   59960 cri.go:89] found id: ""
	I1126 20:11:24.756497   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.756521   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:24.756540   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:24.756658   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:24.803189   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:24.803256   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:24.803274   59960 cri.go:89] found id: ""
	I1126 20:11:24.803295   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:24.803379   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.808196   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.812071   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:24.812194   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:24.852305   59960 cri.go:89] found id: ""
	I1126 20:11:24.852378   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.852408   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:24.852429   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:24.852520   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:24.889194   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:24.889263   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:24.889294   59960 cri.go:89] found id: ""
	I1126 20:11:24.889320   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:24.889413   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.893347   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.897224   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:24.897334   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:24.930230   59960 cri.go:89] found id: ""
	I1126 20:11:24.930304   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.930333   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:24.930344   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:24.930371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:25.035563   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:25.035604   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:25.054082   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:25.054112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:25.096053   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:25.096081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:25.145970   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:25.146007   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:25.185648   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:25.185678   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:25.214168   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:25.214199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:25.247077   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:25.247106   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:25.338812   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:25.330325    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.331301    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.332972    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.333487    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.335076    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:25.330325    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.331301    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.332972    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.333487    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.335076    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:25.338839   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:25.338854   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:25.379564   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:25.379600   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:25.447694   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:25.447730   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:25.472568   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:25.472598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:28.058550   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:28.076007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:28.076082   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:28.106329   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:28.106351   59960 cri.go:89] found id: ""
	I1126 20:11:28.106360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:28.106418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.110514   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:28.110591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:28.140757   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:28.140777   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:28.140782   59960 cri.go:89] found id: ""
	I1126 20:11:28.140789   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:28.140842   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.144844   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.148401   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:28.148473   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:28.174921   59960 cri.go:89] found id: ""
	I1126 20:11:28.174944   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.174953   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:28.174959   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:28.175022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:28.202405   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:28.202425   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:28.202429   59960 cri.go:89] found id: ""
	I1126 20:11:28.202436   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:28.202491   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.207455   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.211480   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:28.211548   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:28.239676   59960 cri.go:89] found id: ""
	I1126 20:11:28.239749   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.239773   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:28.239793   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:28.239857   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:28.269256   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:28.269277   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:28.269282   59960 cri.go:89] found id: ""
	I1126 20:11:28.269289   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:28.269344   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.273004   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.276329   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:28.276398   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:28.302206   59960 cri.go:89] found id: ""
	I1126 20:11:28.302272   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.302298   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:28.302321   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:28.302363   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:28.332034   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:28.332062   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:28.376567   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:28.376603   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:28.441530   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:28.441568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:28.468188   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:28.468219   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:28.544745   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:28.544780   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:28.590841   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:28.590870   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:28.603163   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:28.603194   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:28.675368   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:28.666467    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.667143    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.668892    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.669848    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.671529    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:28.666467    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.667143    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.668892    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.669848    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.671529    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:28.675390   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:28.675403   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:28.716129   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:28.716160   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:28.746889   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:28.746916   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:28.784649   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:28.784678   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:31.386032   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:31.396663   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:31.396729   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:31.424252   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:31.424274   59960 cri.go:89] found id: ""
	I1126 20:11:31.424282   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:31.424337   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.427909   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:31.427983   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:31.459053   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:31.459075   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:31.459080   59960 cri.go:89] found id: ""
	I1126 20:11:31.459088   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:31.459148   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.462802   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.466564   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:31.466687   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:31.497981   59960 cri.go:89] found id: ""
	I1126 20:11:31.498003   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.498012   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:31.498018   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:31.498110   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:31.526027   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:31.526052   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:31.526057   59960 cri.go:89] found id: ""
	I1126 20:11:31.526065   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:31.526170   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.529987   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.534855   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:31.534945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:31.563109   59960 cri.go:89] found id: ""
	I1126 20:11:31.563169   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.563198   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:31.563219   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:31.563293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:31.589243   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:31.589265   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:31.589270   59960 cri.go:89] found id: ""
	I1126 20:11:31.589278   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:31.589354   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.593459   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.596946   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:31.597021   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:31.623525   59960 cri.go:89] found id: ""
	I1126 20:11:31.623558   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.623567   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:31.623576   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:31.623587   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:31.652294   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:31.652373   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:31.735258   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:31.735294   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:31.768608   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:31.768683   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:31.870428   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:31.870508   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:31.897014   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:31.897042   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:32.001263   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:32.001299   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:32.038474   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:32.038514   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:32.052890   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:32.052925   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:32.157895   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:32.150135    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.150798    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152292    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152811    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.154388    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:32.150135    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.150798    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152292    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152811    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.154388    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:32.157991   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:32.158015   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:32.202276   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:32.202312   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:32.246886   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:32.246920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:34.774920   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:34.785509   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:34.785619   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:34.817587   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:34.817656   59960 cri.go:89] found id: ""
	I1126 20:11:34.817682   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:34.817753   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.821524   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:34.821594   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:34.849130   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:34.849154   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:34.849159   59960 cri.go:89] found id: ""
	I1126 20:11:34.849167   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:34.849233   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.852945   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.856601   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:34.856684   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:34.883375   59960 cri.go:89] found id: ""
	I1126 20:11:34.883398   59960 logs.go:282] 0 containers: []
	W1126 20:11:34.883412   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:34.883450   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:34.883524   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:34.909798   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:34.909821   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:34.909826   59960 cri.go:89] found id: ""
	I1126 20:11:34.909834   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:34.909888   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.913552   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.916964   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:34.917033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:34.949567   59960 cri.go:89] found id: ""
	I1126 20:11:34.949592   59960 logs.go:282] 0 containers: []
	W1126 20:11:34.949601   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:34.949608   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:34.949663   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:34.977128   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:34.977150   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:34.977156   59960 cri.go:89] found id: ""
	I1126 20:11:34.977163   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:34.977220   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.981001   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.984842   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:34.984957   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:35.012427   59960 cri.go:89] found id: ""
	I1126 20:11:35.012460   59960 logs.go:282] 0 containers: []
	W1126 20:11:35.012470   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:35.012479   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:35.012493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:35.040355   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:35.040396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:35.085028   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:35.085064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:35.113614   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:35.113649   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:35.153880   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:35.153911   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:35.198643   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:35.198675   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:35.268315   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:35.268350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:35.295776   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:35.295804   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:35.376804   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:35.376847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:35.482429   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:35.482467   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:35.495585   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:35.495620   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:35.570301   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:35.562818    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.563633    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565195    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565472    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.566934    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:35.562818    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.563633    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565195    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565472    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.566934    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:35.570323   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:35.570336   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.104089   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:38.117181   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:38.117256   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:38.149986   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:38.150007   59960 cri.go:89] found id: ""
	I1126 20:11:38.150015   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:38.150071   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.153769   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:38.153836   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:38.181424   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:38.181445   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:38.181450   59960 cri.go:89] found id: ""
	I1126 20:11:38.181457   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:38.181514   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.186065   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.189965   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:38.190088   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:38.222377   59960 cri.go:89] found id: ""
	I1126 20:11:38.222403   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.222412   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:38.222418   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:38.222512   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:38.251289   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:38.251308   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:38.251312   59960 cri.go:89] found id: ""
	I1126 20:11:38.251319   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:38.251376   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.256455   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.260117   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:38.260191   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:38.285970   59960 cri.go:89] found id: ""
	I1126 20:11:38.285993   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.286001   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:38.286007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:38.286071   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:38.316333   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.316352   59960 cri.go:89] found id: ""
	I1126 20:11:38.316360   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:38.316418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.320056   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:38.320141   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:38.346321   59960 cri.go:89] found id: ""
	I1126 20:11:38.346343   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.346355   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:38.346365   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:38.346377   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:38.373397   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:38.373424   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:38.425362   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:38.425395   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.453015   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:38.453091   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:38.532623   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:38.532697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:38.633361   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:38.633397   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:38.645846   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:38.645873   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:38.703411   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:38.703444   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:38.767512   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:38.767547   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:38.796976   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:38.797004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:38.829009   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:38.829038   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:38.898466   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:38.890004    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.890695    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892444    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892921    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.894201    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:38.890004    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.890695    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892444    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892921    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.894201    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:41.398722   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:41.410132   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:41.410201   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:41.438116   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:41.438139   59960 cri.go:89] found id: ""
	I1126 20:11:41.438148   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:41.438205   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.442017   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:41.442090   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:41.469903   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:41.469958   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:41.469963   59960 cri.go:89] found id: ""
	I1126 20:11:41.469970   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:41.470027   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.474067   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.478045   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:41.478121   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:41.505356   59960 cri.go:89] found id: ""
	I1126 20:11:41.505421   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.505446   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:41.505473   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:41.505547   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:41.539013   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:41.539078   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:41.539097   59960 cri.go:89] found id: ""
	I1126 20:11:41.539120   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:41.539192   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.545082   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.548706   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:41.548780   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:41.575834   59960 cri.go:89] found id: ""
	I1126 20:11:41.575859   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.575867   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:41.575874   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:41.575934   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:41.611347   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:41.611373   59960 cri.go:89] found id: ""
	I1126 20:11:41.611381   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:41.611452   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.615789   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:41.615865   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:41.641022   59960 cri.go:89] found id: ""
	I1126 20:11:41.641047   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.641057   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:41.641066   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:41.641078   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:41.742347   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:41.742381   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:41.754134   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:41.754164   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:41.831601   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:41.821574    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.822287    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.823756    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.824699    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.826433    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:41.821574    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.822287    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.823756    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.824699    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.826433    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:41.831624   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:41.831637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:41.860096   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:41.860125   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:41.910250   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:41.910285   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:41.980123   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:41.980161   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:42.010802   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:42.010829   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:42.106028   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:42.106070   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:42.164514   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:42.164559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:42.271103   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:42.271151   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:44.839838   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:44.850546   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:44.850618   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:44.876918   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:44.876988   59960 cri.go:89] found id: ""
	I1126 20:11:44.877011   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:44.877094   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.881043   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:44.881125   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:44.911219   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:44.911239   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:44.911243   59960 cri.go:89] found id: ""
	I1126 20:11:44.911250   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:44.911304   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.914984   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.918517   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:44.918591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:44.948367   59960 cri.go:89] found id: ""
	I1126 20:11:44.948393   59960 logs.go:282] 0 containers: []
	W1126 20:11:44.948403   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:44.948410   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:44.948488   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:44.979725   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:44.979749   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:44.979762   59960 cri.go:89] found id: ""
	I1126 20:11:44.979770   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:44.979825   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.983672   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.987318   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:44.987393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:45.013302   59960 cri.go:89] found id: ""
	I1126 20:11:45.013326   59960 logs.go:282] 0 containers: []
	W1126 20:11:45.013335   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:45.013342   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:45.013400   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:45.055627   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:45.055649   59960 cri.go:89] found id: ""
	I1126 20:11:45.055657   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:45.055726   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:45.085558   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:45.085645   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:45.151023   59960 cri.go:89] found id: ""
	I1126 20:11:45.151097   59960 logs.go:282] 0 containers: []
	W1126 20:11:45.151125   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:45.151149   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:45.151189   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:45.299197   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:45.299495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:45.414522   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:45.414561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:45.426305   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:45.426334   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:45.498361   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:45.490138    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.490855    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.492369    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.493032    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.494581    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:45.490138    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.490855    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.492369    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.493032    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.494581    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
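The `connection refused` errors above come from `kubectl` dialing `localhost:8443` while no kube-apiserver is listening, which is why each cycle in this log begins with `sudo pgrep -xnf kube-apiserver.*minikube.*` before gathering logs. That poll-and-retry shape can be sketched as follows (a minimal shell illustration of the loop visible in the log, not minikube's actual Go implementation; `wait_for_proc` is a hypothetical name):

```shell
#!/bin/sh
# Minimal sketch of the apiserver wait loop: each cycle, check via pgrep
# whether a process matching the pattern exists; if not, retry up to a
# bounded number of times, then give up so log gathering can proceed.
wait_for_proc() {
  pattern=$1
  tries=$2
  i=0
  while [ "$i" -lt "$tries" ]; do
    if pgrep -f "$pattern" >/dev/null 2>&1; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    # the real loop in the log waits roughly 2-3 seconds between checks
  done
  echo "not up"
  return 1
}

# built at runtime so it cannot match any real process command line
wait_for_proc "no-such-proc-$$-demo" 2 || echo "gave up; gathering logs"
```

In the real flow, the "gave up" branch corresponds to the `Gathering logs for ...` sequence that follows each failed check.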
	I1126 20:11:45.498385   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:45.498406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:45.544282   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:45.544315   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:45.572601   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:45.572628   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:45.618675   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:45.618704   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:45.644699   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:45.644729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:45.692766   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:45.692847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:45.768264   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:45.768298   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.298071   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:48.309786   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:48.309955   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:48.338906   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:48.338929   59960 cri.go:89] found id: ""
	I1126 20:11:48.338938   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:48.339013   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.342703   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:48.342807   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:48.373459   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:48.373483   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:48.373489   59960 cri.go:89] found id: ""
	I1126 20:11:48.373497   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:48.373571   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.377243   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.380907   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:48.380978   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:48.410171   59960 cri.go:89] found id: ""
	I1126 20:11:48.410194   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.410203   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:48.410210   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:48.410269   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:48.438118   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:48.438141   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.438146   59960 cri.go:89] found id: ""
	I1126 20:11:48.438153   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:48.438208   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.441706   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.445239   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:48.445331   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:48.471795   59960 cri.go:89] found id: ""
	I1126 20:11:48.471818   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.471827   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:48.471834   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:48.471894   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:48.499373   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:48.499444   59960 cri.go:89] found id: ""
	I1126 20:11:48.499459   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:48.499520   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.503413   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:48.503486   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:48.530399   59960 cri.go:89] found id: ""
	I1126 20:11:48.530421   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.530435   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:48.530450   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:48.530464   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:48.571849   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:48.571882   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:48.658179   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:48.658279   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:48.689018   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:48.689045   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:48.763174   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:48.763207   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:48.778567   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:48.778596   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:48.827328   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:48.827365   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.857288   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:48.857365   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:48.888507   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:48.888539   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:48.988930   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:48.988967   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:49.069225   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:49.055449    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.056233    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.057886    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.058530    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.060083    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:49.055449    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.056233    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.057886    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.058530    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.060083    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:49.069248   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:49.069262   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:51.595258   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:51.606745   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:51.606819   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:51.636395   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:51.636416   59960 cri.go:89] found id: ""
	I1126 20:11:51.636430   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:51.636488   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.640040   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:51.640115   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:51.676792   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:51.676812   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:51.676816   59960 cri.go:89] found id: ""
	I1126 20:11:51.676824   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:51.676877   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.681110   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.685068   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:51.685183   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:51.720013   59960 cri.go:89] found id: ""
	I1126 20:11:51.720038   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.720047   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:51.720054   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:51.720111   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:51.748336   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:51.748360   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:51.748375   59960 cri.go:89] found id: ""
	I1126 20:11:51.748383   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:51.748439   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.752267   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.756170   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:51.756241   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:51.783057   59960 cri.go:89] found id: ""
	I1126 20:11:51.783086   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.783095   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:51.783101   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:51.783163   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:51.811250   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:51.811272   59960 cri.go:89] found id: ""
	I1126 20:11:51.811282   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:51.811338   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.815120   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:51.815232   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:51.846026   59960 cri.go:89] found id: ""
	I1126 20:11:51.846049   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.846064   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:51.846074   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:51.846086   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:51.890348   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:51.890380   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:51.920851   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:51.920922   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:51.977107   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:51.977140   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:52.060932   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:52.060981   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:52.093050   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:52.093078   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:52.176431   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:52.176468   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:52.215980   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:52.216012   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:52.327858   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:52.327901   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:52.340252   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:52.340285   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:52.418993   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:52.410090    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.410776    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.412508    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.413095    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.414685    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:52.410090    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.410776    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.412508    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.413095    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.414685    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
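The repeated `found id:` lines throughout this log reflect how the output of `crictl ps -a --quiet --name=X` is consumed: `--quiet` emits one container ID per line, and each line is reported individually, with an empty trailing entry when the list ends. A rough sketch of that parsing (an assumption about minikube's internals, fed canned IDs from this log rather than a live `crictl` call):

```shell
#!/bin/sh
# Sketch: turn `crictl ps -a --quiet` output (one container ID per line)
# into the `found id: "..."` lines seen in cri.go's log output.
list_found_ids() {
  while IFS= read -r id; do
    [ -n "$id" ] && echo "found id: \"$id\""
  done
  # mirror the log's terminating empty entry (found id: "")
  echo 'found id: ""'
}

printf '217f78028ea5\ncff5f7e1c320\n' | list_found_ids
```

An empty result set (as with `coredns`, `kube-proxy`, and `kindnet` above) yields only the empty entry, which the caller then reports as `No container was found matching ...`.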
	I1126 20:11:52.419016   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:52.419029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:54.944539   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:54.955542   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:54.955615   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:54.986048   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:54.986074   59960 cri.go:89] found id: ""
	I1126 20:11:54.986083   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:54.986139   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:54.989757   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:54.989829   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:55.016053   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:55.016085   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:55.016091   59960 cri.go:89] found id: ""
	I1126 20:11:55.016099   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:55.016174   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.019787   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.023250   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:55.023321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:55.069450   59960 cri.go:89] found id: ""
	I1126 20:11:55.069473   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.069482   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:55.069489   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:55.069572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:55.098641   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:55.098664   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:55.098669   59960 cri.go:89] found id: ""
	I1126 20:11:55.098676   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:55.098732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.102435   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.106227   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:55.106351   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:55.138121   59960 cri.go:89] found id: ""
	I1126 20:11:55.138145   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.138154   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:55.138174   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:55.138236   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:55.167513   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:55.167544   59960 cri.go:89] found id: ""
	I1126 20:11:55.167553   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:55.167618   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.171313   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:55.171381   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:55.202786   59960 cri.go:89] found id: ""
	I1126 20:11:55.202813   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.202822   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:55.202832   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:55.202866   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:55.302444   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:55.302521   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:55.340281   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:55.340307   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:55.380642   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:55.380671   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:55.413529   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:55.413559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:55.441562   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:55.441590   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:55.518521   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:55.518561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:55.558444   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:55.558478   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:55.571280   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:55.571312   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:55.640808   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:55.631279    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.631827    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.633724    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.634294    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.636622    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:11:55.640840   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:55.640855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:55.687489   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:55.687525   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.274871   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:58.285429   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:58.285499   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:58.313375   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:58.313399   59960 cri.go:89] found id: ""
	I1126 20:11:58.313406   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:58.313459   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.316973   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:58.317046   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:58.343195   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:58.343222   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:58.343233   59960 cri.go:89] found id: ""
	I1126 20:11:58.343241   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:58.343299   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.346903   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.350464   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:58.350532   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:58.389630   59960 cri.go:89] found id: ""
	I1126 20:11:58.389651   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.389659   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:58.389666   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:58.389727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:58.417327   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.417347   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:58.417351   59960 cri.go:89] found id: ""
	I1126 20:11:58.417358   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:58.417415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.421999   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.425800   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:58.425864   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:58.452945   59960 cri.go:89] found id: ""
	I1126 20:11:58.452969   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.452977   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:58.452983   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:58.453043   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:58.488167   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:58.488198   59960 cri.go:89] found id: ""
	I1126 20:11:58.488207   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:58.488290   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.492158   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:58.492254   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:58.519792   59960 cri.go:89] found id: ""
	I1126 20:11:58.519815   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.519824   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:58.519833   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:58.519845   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:58.539152   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:58.539178   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:58.611844   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:58.602656    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.604433    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.605264    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.606165    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.607783    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:11:58.611916   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:58.611936   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:58.653684   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:58.653755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:58.701629   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:58.701698   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:58.797678   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:58.797712   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:58.826943   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:58.826971   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:58.870347   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:58.870382   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.935086   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:58.935124   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:58.968825   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:58.968856   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:58.997914   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:58.998030   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:01.577720   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:01.589568   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:01.589642   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:01.621435   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:01.621457   59960 cri.go:89] found id: ""
	I1126 20:12:01.621466   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:01.621521   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.625557   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:01.625630   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:01.653424   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:01.653447   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:01.653452   59960 cri.go:89] found id: ""
	I1126 20:12:01.653459   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:01.653520   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.658113   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.663163   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:01.663279   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:01.690617   59960 cri.go:89] found id: ""
	I1126 20:12:01.690692   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.690707   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:01.690714   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:01.690776   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:01.721669   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:01.721691   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:01.721696   59960 cri.go:89] found id: ""
	I1126 20:12:01.721705   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:01.721760   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.725774   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.729528   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:01.729608   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:01.755428   59960 cri.go:89] found id: ""
	I1126 20:12:01.755452   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.755461   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:01.755468   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:01.755529   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:01.783818   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:01.783841   59960 cri.go:89] found id: ""
	I1126 20:12:01.783849   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:01.783905   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.787656   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:01.787726   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:01.815958   59960 cri.go:89] found id: ""
	I1126 20:12:01.816025   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.816050   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:01.816067   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:01.816080   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:01.867560   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:01.867592   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:01.932205   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:01.932256   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:02.002408   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:02.002441   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:02.051577   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:02.051612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:02.088918   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:02.088948   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:02.168080   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:02.158735    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.159253    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162045    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162706    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.164462    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:02.168105   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:02.168119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:02.244385   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:02.244435   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:02.282263   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:02.282293   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:02.383774   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:02.383810   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:02.399682   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:02.399712   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:04.928429   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:04.939418   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:04.939502   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:04.967318   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:04.967344   59960 cri.go:89] found id: ""
	I1126 20:12:04.967352   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:04.967406   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:04.971172   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:04.971242   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:04.998636   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:04.998660   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:04.998666   59960 cri.go:89] found id: ""
	I1126 20:12:04.998673   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:04.998728   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.002734   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.006234   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:05.006304   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:05.031905   59960 cri.go:89] found id: ""
	I1126 20:12:05.031931   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.031948   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:05.031954   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:05.032022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:05.062024   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:05.062047   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:05.062053   59960 cri.go:89] found id: ""
	I1126 20:12:05.062061   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:05.062119   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.066633   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.070769   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:05.070894   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:05.098088   59960 cri.go:89] found id: ""
	I1126 20:12:05.098113   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.098123   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:05.098130   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:05.098213   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:05.131371   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:05.131394   59960 cri.go:89] found id: ""
	I1126 20:12:05.131403   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:05.131477   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.135270   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:05.135372   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:05.162342   59960 cri.go:89] found id: ""
	I1126 20:12:05.162365   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.162374   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:05.162383   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:05.162395   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:05.235501   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:05.227170    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.227750    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229253    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229720    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.231198    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:05.227170    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.227750    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229253    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229720    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.231198    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:05.235522   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:05.235536   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:05.263102   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:05.263128   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:05.302111   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:05.302144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:05.333187   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:05.333216   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:05.359477   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:05.359505   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:05.438760   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:05.438798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:05.451777   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:05.451807   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:05.498508   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:05.498543   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:05.568808   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:05.568843   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:05.616879   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:05.616909   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:08.220414   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:08.231126   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:08.231199   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:08.258035   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:08.258105   59960 cri.go:89] found id: ""
	I1126 20:12:08.258125   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:08.258192   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.262176   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:08.262249   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:08.289710   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:08.289733   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:08.289739   59960 cri.go:89] found id: ""
	I1126 20:12:08.289750   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:08.289805   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.293485   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.297802   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:08.297880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:08.327209   59960 cri.go:89] found id: ""
	I1126 20:12:08.327234   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.327243   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:08.327263   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:08.327336   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:08.357819   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:08.357840   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:08.357845   59960 cri.go:89] found id: ""
	I1126 20:12:08.357852   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:08.357906   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.361705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.365237   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:08.365328   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:08.394319   59960 cri.go:89] found id: ""
	I1126 20:12:08.394383   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.394399   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:08.394406   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:08.394480   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:08.420463   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:08.420527   59960 cri.go:89] found id: ""
	I1126 20:12:08.420553   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:08.420638   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.424335   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:08.424450   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:08.452961   59960 cri.go:89] found id: ""
	I1126 20:12:08.452986   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.452995   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:08.453003   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:08.453014   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:08.493988   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:08.494022   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:08.544465   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:08.544499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:08.574385   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:08.574413   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:08.586334   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:08.586371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:08.667454   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:08.650997    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.659303    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.660307    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662037    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662374    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:08.650997    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.659303    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.660307    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662037    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662374    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:08.667486   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:08.667499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:08.699349   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:08.699378   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:08.764949   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:08.764985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:08.796757   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:08.796785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:08.880624   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:08.880660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:08.914640   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:08.914667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:11.513808   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:11.524482   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:11.524580   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:11.558859   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:11.558902   59960 cri.go:89] found id: ""
	I1126 20:12:11.558911   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:11.558970   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.562673   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:11.562747   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:11.588932   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:11.588951   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:11.588956   59960 cri.go:89] found id: ""
	I1126 20:12:11.588963   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:11.589017   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.592810   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.596570   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:11.596643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:11.623065   59960 cri.go:89] found id: ""
	I1126 20:12:11.623145   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.623161   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:11.623169   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:11.623229   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:11.650581   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:11.650605   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:11.650610   59960 cri.go:89] found id: ""
	I1126 20:12:11.650618   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:11.650671   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.655559   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.659747   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:11.659817   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:11.687296   59960 cri.go:89] found id: ""
	I1126 20:12:11.687322   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.687331   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:11.687337   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:11.687396   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:11.720511   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:11.720579   59960 cri.go:89] found id: ""
	I1126 20:12:11.720617   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:11.720708   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.724437   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:11.724506   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:11.749548   59960 cri.go:89] found id: ""
	I1126 20:12:11.749582   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.749591   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:11.749601   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:11.749612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:11.844417   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:11.844451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:11.856841   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:11.856870   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:11.927039   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:11.919031    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.919434    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921013    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921770    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.923409    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:11.919031    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.919434    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921013    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921770    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.923409    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:11.927072   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:11.927085   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:11.952749   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:11.952778   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:11.979828   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:11.979854   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:12.054969   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:12.055007   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:12.096829   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:12.096861   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:12.139040   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:12.139073   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:12.188630   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:12.188665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:12.261491   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:12.261525   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:14.793314   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:14.805690   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:14.805792   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:14.834480   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:14.834550   59960 cri.go:89] found id: ""
	I1126 20:12:14.834563   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:14.834624   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.838451   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:14.838546   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:14.865258   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:14.865280   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:14.865288   59960 cri.go:89] found id: ""
	I1126 20:12:14.865296   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:14.865369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.869042   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.872598   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:14.872673   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:14.899453   59960 cri.go:89] found id: ""
	I1126 20:12:14.899475   59960 logs.go:282] 0 containers: []
	W1126 20:12:14.899484   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:14.899491   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:14.899553   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:14.927802   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:14.927830   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:14.927837   59960 cri.go:89] found id: ""
	I1126 20:12:14.927845   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:14.927940   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.932558   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.936133   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:14.936204   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:14.961102   59960 cri.go:89] found id: ""
	I1126 20:12:14.961173   59960 logs.go:282] 0 containers: []
	W1126 20:12:14.961195   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:14.961215   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:14.961302   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:15.002363   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:15.002384   59960 cri.go:89] found id: ""
	I1126 20:12:15.002393   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:15.002447   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:15.006142   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:15.006212   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:15.032134   59960 cri.go:89] found id: ""
	I1126 20:12:15.032199   59960 logs.go:282] 0 containers: []
	W1126 20:12:15.032214   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:15.032224   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:15.032240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:15.081347   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:15.081379   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:15.180623   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:15.180658   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:15.209901   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:15.209962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:15.262607   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:15.262636   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:15.288510   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:15.288544   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:15.367680   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:15.367714   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:15.412204   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:15.412231   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:15.424270   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:15.424300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:15.503073   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:15.494667    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.495283    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.496993    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.497515    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.498972    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:15.494667    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.495283    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.496993    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.497515    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.498972    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:15.503139   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:15.503167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:15.550262   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:15.550296   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.118444   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:18.129864   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:18.129981   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:18.156819   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:18.156838   59960 cri.go:89] found id: ""
	I1126 20:12:18.156846   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:18.156904   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.161071   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:18.161149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:18.189616   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:18.189639   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:18.189644   59960 cri.go:89] found id: ""
	I1126 20:12:18.189651   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:18.189705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.193599   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.197622   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:18.197702   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:18.229000   59960 cri.go:89] found id: ""
	I1126 20:12:18.229024   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.229034   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:18.229041   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:18.229097   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:18.258704   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.258728   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:18.258734   59960 cri.go:89] found id: ""
	I1126 20:12:18.258741   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:18.258799   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.262617   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.266630   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:18.266703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:18.294498   59960 cri.go:89] found id: ""
	I1126 20:12:18.294520   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.294528   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:18.294535   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:18.294592   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:18.321461   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:18.321534   59960 cri.go:89] found id: ""
	I1126 20:12:18.321556   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:18.321645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.325350   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:18.325460   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:18.351492   59960 cri.go:89] found id: ""
	I1126 20:12:18.351553   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.351579   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:18.351599   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:18.351637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:18.407171   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:18.407205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:18.439080   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:18.439112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:18.547958   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:18.547995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:18.619721   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:18.609846    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.610654    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612119    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612768    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.614366    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:18.609846    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.610654    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612119    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612768    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.614366    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:18.619742   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:18.619754   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:18.645098   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:18.645177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:18.682606   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:18.682639   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.763422   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:18.763453   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:18.795735   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:18.795762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:18.822004   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:18.822035   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:18.896691   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:18.896727   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:21.410083   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:21.420840   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:21.420938   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:21.446994   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:21.447016   59960 cri.go:89] found id: ""
	I1126 20:12:21.447024   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:21.447102   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.450650   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:21.450721   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:21.479530   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:21.479554   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:21.479559   59960 cri.go:89] found id: ""
	I1126 20:12:21.479566   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:21.479639   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.483856   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.487301   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:21.487396   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:21.514632   59960 cri.go:89] found id: ""
	I1126 20:12:21.514655   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.514664   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:21.514677   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:21.514734   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:21.552676   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:21.552697   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:21.552701   59960 cri.go:89] found id: ""
	I1126 20:12:21.552708   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:21.552764   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.558562   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.562503   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:21.562570   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:21.592027   59960 cri.go:89] found id: ""
	I1126 20:12:21.592051   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.592059   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:21.592065   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:21.592122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:21.622050   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:21.622069   59960 cri.go:89] found id: ""
	I1126 20:12:21.622078   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:21.622133   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.625979   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:21.626057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:21.659506   59960 cri.go:89] found id: ""
	I1126 20:12:21.659530   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.659539   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:21.659548   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:21.659561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:21.692379   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:21.692406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:21.765021   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:21.765055   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:21.839116   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:21.830975    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.831759    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833349    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833904    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.835476    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:21.830975    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.831759    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833349    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833904    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.835476    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:21.839140   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:21.839153   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:21.865386   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:21.865413   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:21.904223   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:21.904257   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:21.949513   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:21.949545   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:21.975811   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:21.975838   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:22.009804   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:22.009830   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:22.114067   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:22.114107   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:22.129823   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:22.129850   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:24.699777   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:24.710717   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:24.710835   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:24.737361   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:24.737395   59960 cri.go:89] found id: ""
	I1126 20:12:24.737404   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:24.737467   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.741100   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:24.741181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:24.766942   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:24.767005   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:24.767023   59960 cri.go:89] found id: ""
	I1126 20:12:24.767038   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:24.767117   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.771423   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.775599   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:24.775679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:24.807211   59960 cri.go:89] found id: ""
	I1126 20:12:24.807238   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.807247   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:24.807254   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:24.807313   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:24.839448   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:24.839474   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:24.839480   59960 cri.go:89] found id: ""
	I1126 20:12:24.839487   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:24.839543   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.843345   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.846785   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:24.846859   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:24.875974   59960 cri.go:89] found id: ""
	I1126 20:12:24.875999   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.876008   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:24.876015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:24.876074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:24.904623   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:24.904646   59960 cri.go:89] found id: ""
	I1126 20:12:24.904655   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:24.904729   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.908536   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:24.908626   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:24.937367   59960 cri.go:89] found id: ""
	I1126 20:12:24.937448   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.937471   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:24.937494   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:24.937534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:24.976827   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:24.976864   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:25.024594   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:25.024629   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:25.103663   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:25.103701   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:25.184899   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:25.184934   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:25.288663   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:25.288696   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:25.303312   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:25.303340   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:25.371319   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:25.361818    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.362509    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.364256    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.365013    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.366870    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:25.361818    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.362509    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.364256    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.365013    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.366870    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:25.371342   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:25.371357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:25.399886   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:25.399954   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:25.431130   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:25.431162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:25.457679   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:25.457758   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:27.990400   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:28.001290   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:28.001359   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:28.027402   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:28.027424   59960 cri.go:89] found id: ""
	I1126 20:12:28.027441   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:28.027501   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.030992   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:28.031083   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:28.072993   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:28.073014   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:28.073019   59960 cri.go:89] found id: ""
	I1126 20:12:28.073026   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:28.073084   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.076846   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.080628   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:28.080762   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:28.107876   59960 cri.go:89] found id: ""
	I1126 20:12:28.107902   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.107911   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:28.107918   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:28.107993   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:28.135277   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:28.135299   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:28.135305   59960 cri.go:89] found id: ""
	I1126 20:12:28.135312   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:28.135369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.139340   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.143115   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:28.143193   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:28.179129   59960 cri.go:89] found id: ""
	I1126 20:12:28.179230   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.179259   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:28.179273   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:28.179346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:28.208432   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:28.208453   59960 cri.go:89] found id: ""
	I1126 20:12:28.208465   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:28.208523   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.212104   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:28.212174   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:28.239214   59960 cri.go:89] found id: ""
	I1126 20:12:28.239290   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.239307   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:28.239317   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:28.239331   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:28.311306   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:28.311342   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:28.340943   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:28.340972   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:28.376088   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:28.376113   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:28.447578   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:28.440425    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.440837    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442342    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442644    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.444078    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:28.440425    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.440837    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442342    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442644    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.444078    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:28.447601   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:28.447613   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:28.494672   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:28.494707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:28.524817   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:28.524847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:28.611534   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:28.611568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:28.717586   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:28.717621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:28.729869   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:28.729894   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:28.755777   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:28.755805   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.304943   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:31.316121   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:31.316189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:31.344914   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:31.344936   59960 cri.go:89] found id: ""
	I1126 20:12:31.344945   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:31.345000   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.348636   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:31.348708   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:31.376592   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.376614   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:31.376623   59960 cri.go:89] found id: ""
	I1126 20:12:31.376630   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:31.376683   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.380757   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.384468   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:31.384545   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:31.415544   59960 cri.go:89] found id: ""
	I1126 20:12:31.415570   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.415579   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:31.415586   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:31.415646   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:31.441604   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:31.441680   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:31.441699   59960 cri.go:89] found id: ""
	I1126 20:12:31.441723   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:31.441808   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.445590   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.449159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:31.449233   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:31.475467   59960 cri.go:89] found id: ""
	I1126 20:12:31.475492   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.475501   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:31.475507   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:31.475567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:31.505974   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:31.505995   59960 cri.go:89] found id: ""
	I1126 20:12:31.506004   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:31.506068   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.510913   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:31.510988   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:31.555870   59960 cri.go:89] found id: ""
	I1126 20:12:31.555901   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.555911   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:31.555920   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:31.555932   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:31.569317   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:31.569396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:31.639071   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:31.630335    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.631132    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.632992    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.633425    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.635012    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:31.630335    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.631132    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.632992    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.633425    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.635012    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:31.639141   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:31.639171   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:31.685122   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:31.685156   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:31.715735   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:31.715763   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:31.744469   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:31.744499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.782788   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:31.782822   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:31.854784   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:31.854820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:31.883960   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:31.883989   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:31.968197   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:31.968235   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:32.000618   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:32.000646   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:34.599812   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:34.610580   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:34.610690   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:34.643812   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:34.643835   59960 cri.go:89] found id: ""
	I1126 20:12:34.643844   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:34.643902   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.647819   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:34.647891   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:34.681825   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:34.681849   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:34.681855   59960 cri.go:89] found id: ""
	I1126 20:12:34.681863   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:34.681959   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.685589   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.689208   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:34.689280   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:34.719704   59960 cri.go:89] found id: ""
	I1126 20:12:34.719727   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.719736   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:34.719743   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:34.719802   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:34.745609   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:34.745632   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:34.745639   59960 cri.go:89] found id: ""
	I1126 20:12:34.745646   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:34.745704   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.749369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.752915   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:34.752982   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:34.778956   59960 cri.go:89] found id: ""
	I1126 20:12:34.778982   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.778996   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:34.779003   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:34.779059   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:34.805123   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:34.805146   59960 cri.go:89] found id: ""
	I1126 20:12:34.805153   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:34.805211   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.808760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:34.808834   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:34.834427   59960 cri.go:89] found id: ""
	I1126 20:12:34.834452   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.834462   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:34.834471   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:34.834482   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:34.912760   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:34.912792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:35.015751   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:35.015790   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:35.046216   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:35.046291   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:35.092725   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:35.092760   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:35.163096   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:35.163130   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:35.191405   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:35.191488   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:35.227181   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:35.227213   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:35.240889   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:35.240922   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:35.311849   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:35.302602    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.303934    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.304899    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.306705    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.307280    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:35.302602    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.303934    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.304899    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.306705    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.307280    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:35.311871   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:35.311884   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:35.356916   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:35.356951   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:37.883250   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:37.894052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:37.894122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:37.924918   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:37.924943   59960 cri.go:89] found id: ""
	I1126 20:12:37.924956   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:37.925020   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.928865   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:37.928940   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:37.961907   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:37.961958   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:37.961964   59960 cri.go:89] found id: ""
	I1126 20:12:37.961971   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:37.962035   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.965843   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.969339   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:37.969409   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:37.995343   59960 cri.go:89] found id: ""
	I1126 20:12:37.995373   59960 logs.go:282] 0 containers: []
	W1126 20:12:37.995381   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:37.995388   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:37.995491   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:38.022312   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:38.022334   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:38.022339   59960 cri.go:89] found id: ""
	I1126 20:12:38.022346   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:38.022413   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.026080   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.029533   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:38.029622   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:38.060280   59960 cri.go:89] found id: ""
	I1126 20:12:38.060307   59960 logs.go:282] 0 containers: []
	W1126 20:12:38.060346   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:38.060368   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:38.060437   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:38.091248   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:38.091312   59960 cri.go:89] found id: ""
	I1126 20:12:38.091327   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:38.091425   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.095836   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:38.095914   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:38.125378   59960 cri.go:89] found id: ""
	I1126 20:12:38.125403   59960 logs.go:282] 0 containers: []
	W1126 20:12:38.125413   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:38.125422   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:38.125436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:38.151847   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:38.151875   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:38.202356   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:38.202391   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:38.247650   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:38.247725   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:38.275709   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:38.275736   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:38.307514   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:38.307542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:38.404957   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:38.404994   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:38.491924   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:38.491962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:38.521423   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:38.521460   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:38.598021   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:38.598053   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:38.610973   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:38.611004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:38.687841   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:38.679705   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.680686   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.681793   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.682498   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.684162   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:38.679705   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.680686   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.681793   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.682498   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.684162   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:41.188401   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:41.199011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:41.199080   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:41.227170   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:41.227196   59960 cri.go:89] found id: ""
	I1126 20:12:41.227205   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:41.227260   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.230873   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:41.230945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:41.257484   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:41.257506   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:41.257522   59960 cri.go:89] found id: ""
	I1126 20:12:41.257529   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:41.257584   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.261286   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.265036   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:41.265101   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:41.290579   59960 cri.go:89] found id: ""
	I1126 20:12:41.290645   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.290669   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:41.290682   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:41.290741   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:41.319766   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:41.319786   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:41.319791   59960 cri.go:89] found id: ""
	I1126 20:12:41.319799   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:41.319859   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.323637   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.327077   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:41.327177   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:41.356676   59960 cri.go:89] found id: ""
	I1126 20:12:41.356702   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.356711   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:41.356719   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:41.356783   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:41.385771   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:41.385790   59960 cri.go:89] found id: ""
	I1126 20:12:41.385798   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:41.385852   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.389446   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:41.389544   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:41.416642   59960 cri.go:89] found id: ""
	I1126 20:12:41.416710   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.416732   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:41.416754   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:41.416788   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:41.482246   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:41.473419   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.474136   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.475824   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.476403   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.478152   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:41.473419   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.474136   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.475824   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.476403   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.478152   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:41.482311   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:41.482339   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:41.509950   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:41.510016   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:41.557291   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:41.557324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:41.584211   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:41.584240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:41.666177   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:41.666212   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:41.767334   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:41.767369   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:41.781064   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:41.781089   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:41.825285   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:41.825321   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:41.892538   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:41.892573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:41.920754   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:41.920785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:44.468280   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:44.479465   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:44.479546   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:44.507592   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:44.507615   59960 cri.go:89] found id: ""
	I1126 20:12:44.507623   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:44.507679   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.511422   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:44.511510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:44.543146   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:44.543169   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:44.543174   59960 cri.go:89] found id: ""
	I1126 20:12:44.543181   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:44.543251   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.547022   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.550639   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:44.550719   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:44.579025   59960 cri.go:89] found id: ""
	I1126 20:12:44.579054   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.579063   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:44.579070   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:44.579139   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:44.611309   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:44.611332   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:44.611336   59960 cri.go:89] found id: ""
	I1126 20:12:44.611344   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:44.611407   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.615332   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.619108   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:44.619183   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:44.645161   59960 cri.go:89] found id: ""
	I1126 20:12:44.645185   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.645194   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:44.645201   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:44.645257   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:44.684280   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:44.684301   59960 cri.go:89] found id: ""
	I1126 20:12:44.684310   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:44.684364   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.687985   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:44.688057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:44.713170   59960 cri.go:89] found id: ""
	I1126 20:12:44.713193   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.713202   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:44.713211   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:44.713225   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:44.790764   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:44.782647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.783505   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785179   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785579   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.787022   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:44.782647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.783505   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785179   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785579   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.787022   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:44.790787   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:44.790801   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:44.841911   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:44.842082   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:44.886124   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:44.886155   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:44.956783   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:44.956817   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:44.992805   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:44.992834   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:45.021163   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:45.021190   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:45.060873   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:45.061452   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:45.201027   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:45.201119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:45.266419   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:45.266547   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:45.415986   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:45.416024   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:47.928674   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:47.940771   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:47.940843   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:47.966175   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:47.966194   59960 cri.go:89] found id: ""
	I1126 20:12:47.966202   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:47.966254   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:47.969908   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:47.970011   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:47.997001   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:47.997027   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:47.997032   59960 cri.go:89] found id: ""
	I1126 20:12:47.997040   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:47.997096   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.001757   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.005881   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:48.005980   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:48.031565   59960 cri.go:89] found id: ""
	I1126 20:12:48.031587   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.031595   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:48.031602   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:48.031660   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:48.063357   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:48.063380   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:48.063386   59960 cri.go:89] found id: ""
	I1126 20:12:48.063393   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:48.063450   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.068044   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.073135   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:48.073260   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:48.103364   59960 cri.go:89] found id: ""
	I1126 20:12:48.103391   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.103401   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:48.103408   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:48.103511   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:48.134700   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:48.134720   59960 cri.go:89] found id: ""
	I1126 20:12:48.134728   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:48.134795   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.138489   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:48.138568   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:48.164615   59960 cri.go:89] found id: ""
	I1126 20:12:48.164639   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.164648   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:48.164657   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:48.164670   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:48.238206   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:48.238245   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:48.270325   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:48.270352   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:48.316632   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:48.316660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:48.328526   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:48.328554   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:48.370051   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:48.370081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:48.397236   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:48.397264   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:48.478994   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:48.479029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:48.586134   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:48.586167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:48.661172   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:48.650880   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.652436   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.653061   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.654717   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.655290   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:48.650880   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.652436   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.653061   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.654717   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.655290   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:48.661195   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:48.661211   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:48.689769   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:48.689797   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.235721   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:51.246961   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:51.247038   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:51.276386   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:51.276410   59960 cri.go:89] found id: ""
	I1126 20:12:51.276419   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:51.276472   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.280282   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:51.280363   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:51.307844   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:51.307875   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.307880   59960 cri.go:89] found id: ""
	I1126 20:12:51.307888   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:51.307944   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.311885   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.315516   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:51.315643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:51.343040   59960 cri.go:89] found id: ""
	I1126 20:12:51.343068   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.343077   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:51.343084   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:51.343144   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:51.371879   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:51.371901   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:51.371907   59960 cri.go:89] found id: ""
	I1126 20:12:51.371920   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:51.371976   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.375815   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.379444   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:51.379518   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:51.409590   59960 cri.go:89] found id: ""
	I1126 20:12:51.409615   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.409624   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:51.409630   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:51.409688   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:51.440665   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:51.440692   59960 cri.go:89] found id: ""
	I1126 20:12:51.440701   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:51.440756   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.444486   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:51.444565   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:51.470661   59960 cri.go:89] found id: ""
	I1126 20:12:51.470686   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.470695   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:51.470705   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:51.470749   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:51.482794   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:51.482823   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:51.570460   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:51.561457   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.562296   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.563970   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.564288   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.566409   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:51.561457   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.562296   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.563970   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.564288   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.566409   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:51.570484   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:51.570498   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:51.596696   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:51.596724   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:51.657780   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:51.657820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:51.736300   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:51.736338   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:51.772635   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:51.772664   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:51.808014   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:51.808042   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:51.909775   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:51.909814   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.955849   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:51.955887   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:51.986011   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:51.986040   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:54.569991   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:54.582000   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:54.582074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:54.610486   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:54.610506   59960 cri.go:89] found id: ""
	I1126 20:12:54.610515   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:54.610573   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.614711   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:54.614787   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:54.641548   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:54.641571   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:54.641577   59960 cri.go:89] found id: ""
	I1126 20:12:54.641584   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:54.641645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.645430   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.649375   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:54.649465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:54.677350   59960 cri.go:89] found id: ""
	I1126 20:12:54.677377   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.677386   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:54.677399   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:54.677456   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:54.706226   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:54.706249   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:54.706254   59960 cri.go:89] found id: ""
	I1126 20:12:54.706261   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:54.706315   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.710188   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.713666   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:54.713759   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:54.745132   59960 cri.go:89] found id: ""
	I1126 20:12:54.745158   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.745167   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:54.745174   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:54.745235   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:54.774016   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:54.774039   59960 cri.go:89] found id: ""
	I1126 20:12:54.774047   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:54.774105   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.778220   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:54.778293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:54.807768   59960 cri.go:89] found id: ""
	I1126 20:12:54.807831   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.807845   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:54.807855   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:54.807867   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:54.904620   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:54.904657   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:54.931520   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:54.931548   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:54.974322   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:54.974360   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:55.010146   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:55.010176   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:55.044963   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:55.045006   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:55.060490   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:55.060520   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:55.132694   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:55.124286   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.124937   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.126610   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.127207   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.128929   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:55.124286   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.124937   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.126610   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.127207   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.128929   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:55.132729   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:55.132746   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:55.180103   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:55.180139   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:55.258117   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:55.258154   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:55.289687   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:55.289716   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:57.870076   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:57.881883   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:57.881978   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:57.911809   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:57.911833   59960 cri.go:89] found id: ""
	I1126 20:12:57.911841   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:57.911899   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.915590   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:57.915685   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:57.943647   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:57.943671   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:57.943677   59960 cri.go:89] found id: ""
	I1126 20:12:57.943684   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:57.943747   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.947699   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.951409   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:57.951489   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:57.979114   59960 cri.go:89] found id: ""
	I1126 20:12:57.979138   59960 logs.go:282] 0 containers: []
	W1126 20:12:57.979147   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:57.979154   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:57.979214   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:58.009760   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:58.009781   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:58.009787   59960 cri.go:89] found id: ""
	I1126 20:12:58.009794   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:58.009855   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.013598   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.017135   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:58.017207   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:58.047222   59960 cri.go:89] found id: ""
	I1126 20:12:58.047247   59960 logs.go:282] 0 containers: []
	W1126 20:12:58.047255   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:58.047262   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:58.047324   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:58.094431   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:58.094510   59960 cri.go:89] found id: ""
	I1126 20:12:58.094524   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:58.094586   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.099004   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:58.099099   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:58.126698   59960 cri.go:89] found id: ""
	I1126 20:12:58.126727   59960 logs.go:282] 0 containers: []
	W1126 20:12:58.126735   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:58.126744   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:58.126756   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:58.155602   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:58.155629   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:58.196131   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:58.196166   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:58.243760   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:58.243793   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:58.314546   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:58.314583   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:58.347422   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:58.347451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:58.373247   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:58.373277   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:58.448488   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:58.448524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:58.480586   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:58.480615   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:58.586743   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:58.586799   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:58.600003   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:58.600029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:58.682648   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:58.673481   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.674315   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.675021   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.676838   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.677737   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:58.673481   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.674315   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.675021   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.676838   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.677737   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:01.183502   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:01.195046   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:01.195153   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:01.224257   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:01.224281   59960 cri.go:89] found id: ""
	I1126 20:13:01.224289   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:01.224365   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.228134   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:01.228206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:01.265990   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:01.266014   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:01.266019   59960 cri.go:89] found id: ""
	I1126 20:13:01.266027   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:01.266084   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.270682   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.274505   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:01.274580   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:01.302962   59960 cri.go:89] found id: ""
	I1126 20:13:01.302989   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.302998   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:01.303005   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:01.303072   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:01.335599   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:01.335621   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:01.335627   59960 cri.go:89] found id: ""
	I1126 20:13:01.335635   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:01.335689   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.339621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.343531   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:01.343614   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:01.369553   59960 cri.go:89] found id: ""
	I1126 20:13:01.369578   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.369588   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:01.369594   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:01.369657   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:01.402170   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:01.402197   59960 cri.go:89] found id: ""
	I1126 20:13:01.402205   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:01.402266   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.406260   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:01.406336   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:01.432250   59960 cri.go:89] found id: ""
	I1126 20:13:01.432326   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.432352   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:01.432362   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:01.432378   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:01.473457   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:01.473491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:01.525391   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:01.525445   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:01.557734   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:01.557765   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:01.650427   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:01.650465   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:01.696040   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:01.696070   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:01.801258   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:01.801297   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:01.872498   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:01.872534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:01.912672   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:01.912725   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:01.927976   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:01.928008   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:02.002577   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:01.992139   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.993221   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.994589   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996153   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996915   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:01.992139   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.993221   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.994589   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996153   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996915   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:02.002601   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:02.002614   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:04.532051   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:04.544501   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:04.544572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:04.571414   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:04.571435   59960 cri.go:89] found id: ""
	I1126 20:13:04.571443   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:04.571494   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.575072   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:04.575149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:04.603292   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:04.603312   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:04.603316   59960 cri.go:89] found id: ""
	I1126 20:13:04.603326   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:04.603378   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.607479   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.610889   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:04.610970   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:04.636626   59960 cri.go:89] found id: ""
	I1126 20:13:04.636652   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.636662   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:04.636668   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:04.636745   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:04.665487   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:04.665511   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:04.665516   59960 cri.go:89] found id: ""
	I1126 20:13:04.665523   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:04.665599   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.669516   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.673155   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:04.673221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:04.705848   59960 cri.go:89] found id: ""
	I1126 20:13:04.705873   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.705882   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:04.705888   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:04.705971   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:04.741254   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:04.741277   59960 cri.go:89] found id: ""
	I1126 20:13:04.741285   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:04.741340   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.745396   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:04.745469   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:04.777680   59960 cri.go:89] found id: ""
	I1126 20:13:04.777713   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.777723   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:04.777732   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:04.777744   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:04.884972   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:04.885008   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:04.898040   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:04.898066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:04.971530   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:04.971610   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:05.003493   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:05.003573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:05.082481   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:05.082515   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:05.116089   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:05.116119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:05.186979   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:05.178888   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.179664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181297   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.183205   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:05.178888   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.179664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181297   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.183205   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:05.187006   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:05.187020   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:05.214669   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:05.214698   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:05.261207   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:05.261238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:05.306449   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:05.306482   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:07.838042   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:07.850498   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:07.850567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:07.878108   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:07.878130   59960 cri.go:89] found id: ""
	I1126 20:13:07.878138   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:07.878197   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.882580   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:07.882654   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:07.911855   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:07.911886   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:07.911891   59960 cri.go:89] found id: ""
	I1126 20:13:07.911899   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:07.911960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.915705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.919300   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:07.919371   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:07.951018   59960 cri.go:89] found id: ""
	I1126 20:13:07.951044   59960 logs.go:282] 0 containers: []
	W1126 20:13:07.951053   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:07.951059   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:07.951119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:07.978929   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:07.978951   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:07.978956   59960 cri.go:89] found id: ""
	I1126 20:13:07.978963   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:07.979017   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.983189   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.986830   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:07.986903   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:08.016199   59960 cri.go:89] found id: ""
	I1126 20:13:08.016231   59960 logs.go:282] 0 containers: []
	W1126 20:13:08.016240   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:08.016251   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:08.016325   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:08.053456   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:08.053528   59960 cri.go:89] found id: ""
	I1126 20:13:08.053549   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:08.053644   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:08.057986   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:08.058066   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:08.087479   59960 cri.go:89] found id: ""
	I1126 20:13:08.087508   59960 logs.go:282] 0 containers: []
	W1126 20:13:08.087517   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:08.087533   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:08.087546   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:08.132468   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:08.132502   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:08.176740   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:08.176778   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:08.250131   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:08.250178   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:08.280307   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:08.280337   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:08.310477   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:08.310506   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:08.413610   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:08.413648   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:08.484512   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:08.474848   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.476074   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.477530   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.478182   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.479748   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:08.484538   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:08.484551   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:08.561138   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:08.561172   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:08.596362   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:08.596439   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:08.609838   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:08.609909   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.136633   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:11.147922   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:11.148007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:11.179880   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.179915   59960 cri.go:89] found id: ""
	I1126 20:13:11.179923   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:11.180040   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.184887   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:11.184958   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:11.213848   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:11.213872   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:11.213878   59960 cri.go:89] found id: ""
	I1126 20:13:11.213885   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:11.213981   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.217804   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.221572   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:11.221649   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:11.258706   59960 cri.go:89] found id: ""
	I1126 20:13:11.258783   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.258799   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:11.258806   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:11.258880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:11.289663   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:11.289686   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:11.289692   59960 cri.go:89] found id: ""
	I1126 20:13:11.289699   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:11.289755   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.293522   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.298425   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:11.298504   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:11.325442   59960 cri.go:89] found id: ""
	I1126 20:13:11.325508   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.325534   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:11.325552   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:11.325636   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:11.352745   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:11.352808   59960 cri.go:89] found id: ""
	I1126 20:13:11.352834   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:11.352923   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.356710   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:11.356824   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:11.384378   59960 cri.go:89] found id: ""
	I1126 20:13:11.384402   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.384412   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:11.384421   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:11.384433   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:11.396869   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:11.396938   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:11.467278   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:11.459180   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.459948   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.461472   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.462000   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.463589   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:11.467302   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:11.467316   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.494598   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:11.494626   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:11.533337   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:11.533372   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:11.559364   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:11.559392   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:11.642834   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:11.642873   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:11.680367   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:11.680393   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:11.784039   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:11.784075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:11.834225   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:11.834260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:11.905094   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:11.905129   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.439226   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:14.451155   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:14.451245   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:14.493752   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:14.493776   59960 cri.go:89] found id: ""
	I1126 20:13:14.493784   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:14.493840   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.497504   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:14.497627   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:14.524624   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:14.524646   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:14.524652   59960 cri.go:89] found id: ""
	I1126 20:13:14.524659   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:14.524743   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.528418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.532417   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:14.532512   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:14.559402   59960 cri.go:89] found id: ""
	I1126 20:13:14.559477   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.559491   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:14.559498   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:14.559556   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:14.588825   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:14.588848   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.588853   59960 cri.go:89] found id: ""
	I1126 20:13:14.588860   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:14.588921   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.593022   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.596763   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:14.596831   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:14.624835   59960 cri.go:89] found id: ""
	I1126 20:13:14.624858   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.624867   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:14.624874   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:14.624929   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:14.650771   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:14.650846   59960 cri.go:89] found id: ""
	I1126 20:13:14.650872   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:14.650960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.656095   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:14.656219   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:14.682420   59960 cri.go:89] found id: ""
	I1126 20:13:14.682493   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.682517   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:14.682540   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:14.682581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:14.722936   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:14.722971   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.754105   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:14.754134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:14.786128   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:14.786156   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:14.798341   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:14.798370   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:14.873270   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:14.865757   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.866349   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.867866   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.868348   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.869793   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:14.873292   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:14.873306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:14.920206   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:14.920240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:14.996591   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:14.996624   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:15.024423   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:15.024451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:15.105848   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:15.105881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:15.205091   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:15.205170   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:17.734682   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:17.745326   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:17.745391   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:17.773503   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:17.773525   59960 cri.go:89] found id: ""
	I1126 20:13:17.773534   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:17.773621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.777326   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:17.777400   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:17.805117   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:17.805139   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:17.805144   59960 cri.go:89] found id: ""
	I1126 20:13:17.805151   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:17.805206   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.809065   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.812530   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:17.812601   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:17.841430   59960 cri.go:89] found id: ""
	I1126 20:13:17.841456   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.841465   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:17.841472   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:17.841530   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:17.868985   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:17.869009   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:17.869014   59960 cri.go:89] found id: ""
	I1126 20:13:17.869024   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:17.869081   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.882183   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.885701   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:17.885794   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:17.918849   59960 cri.go:89] found id: ""
	I1126 20:13:17.918872   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.918880   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:17.918887   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:17.918947   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:17.949773   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:17.949849   59960 cri.go:89] found id: ""
	I1126 20:13:17.949872   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:17.949996   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.953636   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:17.953705   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:17.980243   59960 cri.go:89] found id: ""
	I1126 20:13:17.980266   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.980275   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:17.980284   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:17.980295   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:18.011301   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:18.011331   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:18.038493   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:18.038526   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:18.080613   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:18.080641   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:18.160950   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:18.160988   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:18.262170   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:18.262215   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:18.275569   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:18.275593   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:18.351781   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:18.343534   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.344057   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.345769   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.346381   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.347931   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:18.351805   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:18.351817   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:18.389344   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:18.389375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:18.434916   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:18.434949   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:18.527668   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:18.527702   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.058771   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:21.073274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:21.073339   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:21.121326   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:21.121345   59960 cri.go:89] found id: ""
	I1126 20:13:21.121356   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:21.121415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.130434   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:21.130507   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:21.164100   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:21.164161   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:21.164191   59960 cri.go:89] found id: ""
	I1126 20:13:21.164212   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:21.164289   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.168566   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.173217   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:21.173328   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:21.201882   59960 cri.go:89] found id: ""
	I1126 20:13:21.202006   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.202036   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:21.202055   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:21.202157   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:21.230033   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:21.230099   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:21.230120   59960 cri.go:89] found id: ""
	I1126 20:13:21.230144   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:21.230222   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.234188   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.238625   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:21.238709   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:21.266450   59960 cri.go:89] found id: ""
	I1126 20:13:21.266476   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.266485   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:21.266492   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:21.266567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:21.293192   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.293221   59960 cri.go:89] found id: ""
	I1126 20:13:21.293229   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:21.293320   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.297074   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:21.297146   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:21.325608   59960 cri.go:89] found id: ""
	I1126 20:13:21.325635   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.325644   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:21.325653   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:21.325665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:21.365168   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:21.365201   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:21.407809   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:21.407841   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:21.490502   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:21.490538   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:21.593562   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:21.593598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:21.620251   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:21.620280   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:21.696224   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:21.696260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:21.724295   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:21.724324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.754121   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:21.754146   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:21.785320   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:21.785347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:21.797528   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:21.797556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:21.871066   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:21.862248   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.863127   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.864832   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.865449   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.867089   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:21.862248   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.863127   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.864832   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.865449   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.867089   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:24.371542   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:24.382011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:24.382074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:24.413323   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:24.413351   59960 cri.go:89] found id: ""
	I1126 20:13:24.413360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:24.413418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.417248   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:24.417327   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:24.443549   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:24.443571   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:24.443576   59960 cri.go:89] found id: ""
	I1126 20:13:24.443583   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:24.443638   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.447448   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.450865   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:24.450933   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:24.481019   59960 cri.go:89] found id: ""
	I1126 20:13:24.481043   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.481052   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:24.481059   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:24.481119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:24.509327   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:24.509349   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:24.509354   59960 cri.go:89] found id: ""
	I1126 20:13:24.509361   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:24.509416   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.512867   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.516116   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:24.516181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:24.546284   59960 cri.go:89] found id: ""
	I1126 20:13:24.546361   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.546390   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:24.546405   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:24.546464   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:24.571968   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:24.572032   59960 cri.go:89] found id: ""
	I1126 20:13:24.572047   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:24.572113   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.575760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:24.575830   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:24.603299   59960 cri.go:89] found id: ""
	I1126 20:13:24.603325   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.603334   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:24.603373   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:24.603390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:24.642562   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:24.642595   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:24.696607   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:24.696640   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:24.724494   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:24.724523   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:24.805443   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:24.805477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:24.880673   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:24.872137   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.872936   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.874737   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.875329   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.876994   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:24.872137   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.872936   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.874737   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.875329   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.876994   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:24.880694   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:24.880708   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:24.912019   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:24.912047   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:24.998475   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:24.998511   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:25.027058   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:25.027084   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:25.060548   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:25.060577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:25.167756   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:25.167795   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:27.682279   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:27.693116   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:27.693189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:27.720687   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:27.720706   59960 cri.go:89] found id: ""
	I1126 20:13:27.720713   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:27.720765   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.724317   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:27.724388   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:27.751345   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:27.751369   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:27.751375   59960 cri.go:89] found id: ""
	I1126 20:13:27.751384   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:27.751445   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.755313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.758668   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:27.758738   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:27.788496   59960 cri.go:89] found id: ""
	I1126 20:13:27.788567   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.788592   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:27.788611   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:27.788703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:27.815714   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:27.815743   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:27.815749   59960 cri.go:89] found id: ""
	I1126 20:13:27.815757   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:27.815831   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.819360   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.822959   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:27.823038   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:27.853270   59960 cri.go:89] found id: ""
	I1126 20:13:27.853316   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.853326   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:27.853333   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:27.853403   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:27.880677   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:27.880701   59960 cri.go:89] found id: ""
	I1126 20:13:27.880710   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:27.880766   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.884425   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:27.884499   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:27.917060   59960 cri.go:89] found id: ""
	I1126 20:13:27.917126   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.917150   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:27.917183   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:27.917213   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:27.929246   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:27.929321   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:28.005492   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:27.995998   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.996970   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.999116   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.000043   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.001867   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:27.995998   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.996970   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.999116   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.000043   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.001867   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:28.005554   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:28.005581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:28.032388   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:28.032414   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:28.090244   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:28.090279   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:28.140049   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:28.140081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:28.217015   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:28.217052   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:28.252634   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:28.252663   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:28.356298   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:28.356347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:28.391198   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:28.391227   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:28.470669   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:28.470706   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:31.018712   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:31.029520   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:31.029594   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:31.067229   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:31.067249   59960 cri.go:89] found id: ""
	I1126 20:13:31.067257   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:31.067315   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.071728   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:31.071796   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:31.100937   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:31.101015   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:31.101024   59960 cri.go:89] found id: ""
	I1126 20:13:31.101032   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:31.101092   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.106006   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.109883   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:31.110020   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:31.140073   59960 cri.go:89] found id: ""
	I1126 20:13:31.140098   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.140107   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:31.140114   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:31.140177   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:31.170126   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:31.170150   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:31.170155   59960 cri.go:89] found id: ""
	I1126 20:13:31.170163   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:31.170220   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.175522   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.180015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:31.180137   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:31.216744   59960 cri.go:89] found id: ""
	I1126 20:13:31.216771   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.216781   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:31.216787   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:31.216847   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:31.244620   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:31.244653   59960 cri.go:89] found id: ""
	I1126 20:13:31.244661   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:31.244727   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.248677   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:31.248770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:31.275812   59960 cri.go:89] found id: ""
	I1126 20:13:31.275890   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.275914   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:31.275936   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:31.275972   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:31.308954   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:31.308981   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:31.404058   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:31.404140   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:31.449144   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:31.449177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:31.526538   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:31.526575   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:31.613358   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:31.613393   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:31.626272   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:31.626300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:31.701051   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:31.692350   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.693035   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.694572   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.695120   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.696599   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:31.692350   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.693035   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.694572   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.695120   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.696599   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:31.701076   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:31.701089   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:31.726047   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:31.726075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:31.770205   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:31.770246   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:31.800872   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:31.800898   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.331337   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:34.343013   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:34.343079   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:34.369127   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:34.369186   59960 cri.go:89] found id: ""
	I1126 20:13:34.369220   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:34.369305   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.372919   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:34.372984   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:34.400785   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:34.400806   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:34.400811   59960 cri.go:89] found id: ""
	I1126 20:13:34.400818   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:34.400871   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.404967   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.408568   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:34.408648   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:34.434956   59960 cri.go:89] found id: ""
	I1126 20:13:34.434981   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.434990   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:34.434996   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:34.435051   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:34.472918   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:34.472943   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:34.472948   59960 cri.go:89] found id: ""
	I1126 20:13:34.472956   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:34.473009   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.476556   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.480021   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:34.480097   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:34.506491   59960 cri.go:89] found id: ""
	I1126 20:13:34.506513   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.506522   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:34.506528   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:34.506587   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:34.534595   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.534618   59960 cri.go:89] found id: ""
	I1126 20:13:34.534627   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:34.534681   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.542373   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:34.542487   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:34.569404   59960 cri.go:89] found id: ""
	I1126 20:13:34.569439   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.569449   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:34.569473   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:34.569491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:34.594901   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:34.594926   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:34.661252   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:34.661357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:34.736470   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:34.736504   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.767635   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:34.767659   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:34.849541   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:34.849578   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:34.890089   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:34.890122   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:34.918362   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:34.918390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:34.955774   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:34.955800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:35.056965   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:35.057001   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:35.078639   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:35.078668   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:35.151655   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:35.143337   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.143918   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.145438   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.146046   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.147630   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:35.143337   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.143918   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.145438   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.146046   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.147630   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:37.653306   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:37.665236   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:37.665306   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:37.692381   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:37.692404   59960 cri.go:89] found id: ""
	I1126 20:13:37.692420   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:37.692475   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.696411   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:37.696485   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:37.733416   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:37.733447   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:37.733452   59960 cri.go:89] found id: ""
	I1126 20:13:37.733459   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:37.733512   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.737487   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.740759   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:37.740827   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:37.770540   59960 cri.go:89] found id: ""
	I1126 20:13:37.770563   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.770571   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:37.770578   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:37.770645   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:37.798542   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:37.798566   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:37.798572   59960 cri.go:89] found id: ""
	I1126 20:13:37.798579   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:37.798632   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.802507   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.806007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:37.806128   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:37.831752   59960 cri.go:89] found id: ""
	I1126 20:13:37.831780   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.831789   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:37.831796   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:37.831911   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:37.859491   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:37.859516   59960 cri.go:89] found id: ""
	I1126 20:13:37.859526   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:37.859608   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.863305   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:37.863407   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:37.890262   59960 cri.go:89] found id: ""
	I1126 20:13:37.890324   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.890347   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:37.890370   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:37.890389   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:37.915303   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:37.915334   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:38.015981   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:38.016018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:38.028479   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:38.028518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:38.117235   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:38.107607   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.108494   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.110529   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.111224   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.112955   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:38.107607   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.108494   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.110529   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.111224   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.112955   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:38.117268   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:38.117293   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:38.146073   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:38.146106   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:38.223055   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:38.223091   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:38.256738   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:38.256769   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:38.284204   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:38.284234   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:38.322205   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:38.322237   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:38.365768   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:38.365800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:40.946037   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:40.957084   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:40.957219   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:40.988160   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:40.988223   59960 cri.go:89] found id: ""
	I1126 20:13:40.988247   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:40.988330   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:40.991862   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:40.991975   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:41.021645   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:41.021671   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:41.021676   59960 cri.go:89] found id: ""
	I1126 20:13:41.021683   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:41.021776   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.025458   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.028751   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:41.028818   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:41.055272   59960 cri.go:89] found id: ""
	I1126 20:13:41.055297   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.055306   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:41.055313   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:41.055373   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:41.083272   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:41.083293   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:41.083298   59960 cri.go:89] found id: ""
	I1126 20:13:41.083306   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:41.083361   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.089116   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.092770   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:41.092882   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:41.119939   59960 cri.go:89] found id: ""
	I1126 20:13:41.119969   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.119978   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:41.119985   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:41.120085   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:41.149635   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:41.149657   59960 cri.go:89] found id: ""
	I1126 20:13:41.149666   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:41.149719   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.153346   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:41.153420   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:41.180294   59960 cri.go:89] found id: ""
	I1126 20:13:41.180320   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.180329   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:41.180338   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:41.180350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:41.207608   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:41.207638   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:41.250184   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:41.250217   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:41.280787   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:41.280815   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:41.350595   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:41.339246   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.340025   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.341777   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.342622   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.345147   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:41.339246   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.340025   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.341777   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.342622   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.345147   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:41.350618   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:41.350631   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:41.395571   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:41.395607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:41.471537   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:41.471576   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:41.503158   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:41.503187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:41.581612   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:41.581647   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:41.616210   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:41.616238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:41.712278   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:41.712311   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:44.224835   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:44.235354   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:44.235427   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:44.262020   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:44.262040   59960 cri.go:89] found id: ""
	I1126 20:13:44.262047   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:44.262100   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.266500   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:44.266621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:44.293469   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:44.293492   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:44.293498   59960 cri.go:89] found id: ""
	I1126 20:13:44.293515   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:44.293592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.297513   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.301293   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:44.301379   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:44.331229   59960 cri.go:89] found id: ""
	I1126 20:13:44.331252   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.331260   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:44.331266   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:44.331326   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:44.358510   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:44.358529   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:44.358534   59960 cri.go:89] found id: ""
	I1126 20:13:44.358540   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:44.358597   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.362369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.365719   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:44.365788   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:44.401237   59960 cri.go:89] found id: ""
	I1126 20:13:44.401303   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.401326   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:44.401348   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:44.401437   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:44.428506   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:44.428524   59960 cri.go:89] found id: ""
	I1126 20:13:44.428537   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:44.428592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.432302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:44.432379   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:44.461193   59960 cri.go:89] found id: ""
	I1126 20:13:44.461216   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.461225   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:44.461234   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:44.461245   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:44.472842   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:44.472911   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:44.552602   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:44.536833   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.537581   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.546763   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.547452   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.548655   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:44.536833   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.537581   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.546763   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.547452   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.548655   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:44.552629   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:44.552642   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:44.579143   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:44.579171   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:44.608447   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:44.608472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:44.634421   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:44.634447   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:44.669334   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:44.669362   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:44.770710   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:44.770785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:44.815986   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:44.816016   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:44.860293   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:44.860327   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:44.936110   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:44.936144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:47.514839   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:47.528244   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:47.528398   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:47.557240   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:47.557263   59960 cri.go:89] found id: ""
	I1126 20:13:47.557271   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:47.557328   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.561044   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:47.561146   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:47.586866   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:47.586888   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:47.586894   59960 cri.go:89] found id: ""
	I1126 20:13:47.586901   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:47.586956   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.591194   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.594829   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:47.594905   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:47.621081   59960 cri.go:89] found id: ""
	I1126 20:13:47.621104   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.621113   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:47.621120   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:47.621182   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:47.649583   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:47.649605   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:47.649610   59960 cri.go:89] found id: ""
	I1126 20:13:47.649618   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:47.649673   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.655090   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.659029   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:47.659096   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:47.685101   59960 cri.go:89] found id: ""
	I1126 20:13:47.685125   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.685134   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:47.685141   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:47.685198   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:47.712581   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:47.712603   59960 cri.go:89] found id: ""
	I1126 20:13:47.712612   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:47.712673   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.716384   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:47.716461   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:47.746287   59960 cri.go:89] found id: ""
	I1126 20:13:47.746321   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.746330   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:47.746357   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:47.746375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:47.776577   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:47.776607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:47.810845   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:47.810874   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:47.851317   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:47.851350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:47.897021   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:47.897054   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:47.925761   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:47.925792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:47.953836   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:47.953863   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:48.054533   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:48.054569   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:48.074474   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:48.074505   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:48.148938   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:48.137331   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.137950   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.139682   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.140242   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.143726   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:48.137331   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.137950   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.139682   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.140242   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.143726   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:48.148963   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:48.148977   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:48.231199   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:48.231234   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:50.823233   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:50.833805   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:50.833878   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:50.862309   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:50.862333   59960 cri.go:89] found id: ""
	I1126 20:13:50.862342   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:50.862396   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.865957   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:50.866034   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:50.892542   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:50.892565   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:50.892571   59960 cri.go:89] found id: ""
	I1126 20:13:50.892578   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:50.892632   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.896328   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.899831   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:50.899905   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:50.931031   59960 cri.go:89] found id: ""
	I1126 20:13:50.931098   59960 logs.go:282] 0 containers: []
	W1126 20:13:50.931112   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:50.931119   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:50.931176   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:50.958547   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:50.958580   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:50.958586   59960 cri.go:89] found id: ""
	I1126 20:13:50.958594   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:50.958649   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.962711   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.966380   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:50.966453   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:50.998188   59960 cri.go:89] found id: ""
	I1126 20:13:50.998483   59960 logs.go:282] 0 containers: []
	W1126 20:13:50.998498   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:50.998505   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:50.998592   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:51.031422   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:51.031447   59960 cri.go:89] found id: ""
	I1126 20:13:51.031462   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:51.031519   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:51.035715   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:51.035788   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:51.077429   59960 cri.go:89] found id: ""
	I1126 20:13:51.077452   59960 logs.go:282] 0 containers: []
	W1126 20:13:51.077460   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:51.077469   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:51.077481   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:51.105578   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:51.105609   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:51.188473   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:51.188518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:51.220853   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:51.220886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:51.304811   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:51.304848   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:51.337094   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:51.337162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:51.434145   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:51.434183   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:51.474781   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:51.474815   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:51.523360   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:51.523390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:51.556210   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:51.556238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:51.568960   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:51.568989   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:51.646125   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:51.637986   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.638634   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640319   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640884   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.642607   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:51.637986   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.638634   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640319   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640884   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.642607   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:54.147140   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:54.159570   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:54.159641   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:54.190129   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:54.190150   59960 cri.go:89] found id: ""
	I1126 20:13:54.190158   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:54.190221   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.193723   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:54.193795   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:54.221859   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:54.221881   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:54.221886   59960 cri.go:89] found id: ""
	I1126 20:13:54.221893   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:54.221986   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.225619   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.229615   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:54.229686   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:54.257427   59960 cri.go:89] found id: ""
	I1126 20:13:54.257454   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.257464   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:54.257470   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:54.257528   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:54.283499   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:54.283522   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:54.283528   59960 cri.go:89] found id: ""
	I1126 20:13:54.283535   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:54.283591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.287279   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.291072   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:54.291164   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:54.320377   59960 cri.go:89] found id: ""
	I1126 20:13:54.320409   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.320418   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:54.320424   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:54.320490   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:54.346357   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:54.346388   59960 cri.go:89] found id: ""
	I1126 20:13:54.346397   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:54.346453   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.350217   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:54.350337   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:54.387000   59960 cri.go:89] found id: ""
	I1126 20:13:54.387033   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.387042   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:54.387052   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:54.387064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:54.398981   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:54.399006   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:54.424733   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:54.424761   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:54.464124   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:54.464199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:54.516097   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:54.516149   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:54.597621   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:54.597656   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:54.626882   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:54.626916   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:54.706226   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:54.706262   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:54.777575   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:54.768229   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.769042   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.770705   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.771452   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.773075   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:54.768229   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.769042   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.770705   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.771452   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.773075   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:54.777599   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:54.777612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:54.808526   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:54.808556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:54.839385   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:54.839412   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:57.435357   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:57.446250   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:57.446321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:57.476511   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:57.476531   59960 cri.go:89] found id: ""
	I1126 20:13:57.476539   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:57.476595   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.480521   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:57.480599   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:57.508216   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:57.508239   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:57.508244   59960 cri.go:89] found id: ""
	I1126 20:13:57.508251   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:57.508312   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.512264   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.515930   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:57.516007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:57.546712   59960 cri.go:89] found id: ""
	I1126 20:13:57.546737   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.546746   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:57.546753   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:57.546811   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:57.575286   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:57.575308   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:57.575314   59960 cri.go:89] found id: ""
	I1126 20:13:57.575321   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:57.575403   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.579177   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.582844   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:57.582947   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:57.610240   59960 cri.go:89] found id: ""
	I1126 20:13:57.610268   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.610276   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:57.610282   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:57.610366   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:57.637690   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:57.637715   59960 cri.go:89] found id: ""
	I1126 20:13:57.637722   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:57.637804   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.641691   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:57.641816   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:57.673478   59960 cri.go:89] found id: ""
	I1126 20:13:57.673512   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.673521   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:57.673546   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:57.673565   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:57.724644   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:57.724677   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:57.801587   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:57.801622   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:57.846990   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:57.847020   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:57.948301   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:57.948336   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:57.960477   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:57.960510   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:58.036195   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:58.028003   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.028530   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030166   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030875   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.032666   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:58.028003   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.028530   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030166   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030875   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.032666   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:58.036262   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:58.036289   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:58.071247   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:58.071284   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:58.102552   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:58.102582   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:58.131358   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:58.131450   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:58.207844   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:58.207883   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:00.754664   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:00.765702   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:00.765771   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:00.806554   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:00.806579   59960 cri.go:89] found id: ""
	I1126 20:14:00.806587   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:00.806641   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.810501   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:00.810586   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:00.838112   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:00.838139   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:00.838144   59960 cri.go:89] found id: ""
	I1126 20:14:00.838152   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:00.838207   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.842001   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.845613   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:00.845684   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:00.874701   59960 cri.go:89] found id: ""
	I1126 20:14:00.874726   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.874735   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:00.874742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:00.874821   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:00.903003   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:00.903027   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:00.903032   59960 cri.go:89] found id: ""
	I1126 20:14:00.903039   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:00.903097   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.907398   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.911095   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:00.911169   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:00.937717   59960 cri.go:89] found id: ""
	I1126 20:14:00.937741   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.937750   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:00.937757   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:00.937815   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:00.964659   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:00.964683   59960 cri.go:89] found id: ""
	I1126 20:14:00.964692   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:00.964761   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.969052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:00.969128   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:00.996896   59960 cri.go:89] found id: ""
	I1126 20:14:00.996921   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.996930   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:00.996940   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:00.996968   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:01.052982   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:01.053013   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:01.164358   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:01.164396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:01.245847   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:01.237260   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.238200   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.239244   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.240970   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.241435   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:01.237260   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.238200   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.239244   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.240970   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.241435   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:01.245874   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:01.245888   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:01.278036   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:01.278066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:01.321761   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:01.321798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:01.349850   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:01.349877   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:01.362087   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:01.362115   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:01.406110   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:01.406143   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:01.488538   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:01.488580   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:01.524108   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:01.524314   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:04.107171   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:04.119134   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:04.119206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:04.150892   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:04.150913   59960 cri.go:89] found id: ""
	I1126 20:14:04.150920   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:04.150993   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.154614   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:04.154713   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:04.181842   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:04.181866   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:04.181870   59960 cri.go:89] found id: ""
	I1126 20:14:04.181878   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:04.181958   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.185706   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.189884   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:04.190033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:04.217117   59960 cri.go:89] found id: ""
	I1126 20:14:04.217143   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.217152   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:04.217159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:04.217218   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:04.244873   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:04.244893   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:04.244897   59960 cri.go:89] found id: ""
	I1126 20:14:04.244904   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:04.244962   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.248633   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.252113   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:04.252223   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:04.281381   59960 cri.go:89] found id: ""
	I1126 20:14:04.281410   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.281420   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:04.281426   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:04.281484   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:04.309793   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:04.309817   59960 cri.go:89] found id: ""
	I1126 20:14:04.309825   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:04.309881   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.313555   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:04.313625   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:04.341073   59960 cri.go:89] found id: ""
	I1126 20:14:04.341100   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.341109   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:04.341117   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:04.341129   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:04.436704   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:04.436741   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:04.511848   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:04.500099   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.500700   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506376   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506925   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.508357   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:04.500099   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.500700   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506376   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506925   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.508357   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:04.511872   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:04.511887   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:04.572587   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:04.572662   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:04.622150   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:04.622182   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:04.648129   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:04.648200   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:04.736436   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:04.736472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:04.748750   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:04.748783   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:04.784731   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:04.784756   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:04.861032   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:04.861067   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:04.888273   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:04.888306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:07.422077   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:07.432698   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:07.432776   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:07.463525   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:07.463545   59960 cri.go:89] found id: ""
	I1126 20:14:07.463553   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:07.463605   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.467175   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:07.467243   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:07.497801   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:07.497821   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:07.497826   59960 cri.go:89] found id: ""
	I1126 20:14:07.497833   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:07.497888   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.501759   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.505120   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:07.505198   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:07.539084   59960 cri.go:89] found id: ""
	I1126 20:14:07.539112   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.539121   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:07.539127   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:07.539189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:07.567688   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:07.567713   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:07.567720   59960 cri.go:89] found id: ""
	I1126 20:14:07.567727   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:07.567788   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.571445   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.575895   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:07.575973   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:07.603679   59960 cri.go:89] found id: ""
	I1126 20:14:07.603704   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.603713   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:07.603720   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:07.603801   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:07.633845   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:07.633869   59960 cri.go:89] found id: ""
	I1126 20:14:07.633877   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:07.633982   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.638439   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:07.638510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:07.669305   59960 cri.go:89] found id: ""
	I1126 20:14:07.669329   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.669338   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:07.669348   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:07.669361   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:07.746001   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:07.746039   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:07.773829   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:07.773859   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:07.806673   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:07.806705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:07.847992   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:07.848029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:07.876479   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:07.876507   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:07.952982   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:07.953018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:08.054195   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:08.054235   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:08.071790   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:08.071819   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:08.158168   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:08.148798   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.150262   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.151831   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.152401   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.154098   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:08.148798   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.150262   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.151831   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.152401   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.154098   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:08.158237   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:08.158266   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:08.185227   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:08.185257   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:10.730401   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:10.741460   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:10.741529   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:10.774241   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:10.774263   59960 cri.go:89] found id: ""
	I1126 20:14:10.774270   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:10.774327   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.778033   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:10.778103   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:10.806991   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:10.807015   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:10.807021   59960 cri.go:89] found id: ""
	I1126 20:14:10.807028   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:10.807083   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.810846   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.814441   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:10.814513   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:10.843200   59960 cri.go:89] found id: ""
	I1126 20:14:10.843226   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.843236   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:10.843242   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:10.843301   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:10.871039   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:10.871062   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:10.871068   59960 cri.go:89] found id: ""
	I1126 20:14:10.871075   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:10.871129   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.874747   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.878577   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:10.878661   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:10.907317   59960 cri.go:89] found id: ""
	I1126 20:14:10.907343   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.907352   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:10.907359   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:10.907414   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:10.936274   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:10.936297   59960 cri.go:89] found id: ""
	I1126 20:14:10.936306   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:10.936385   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.939976   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:10.940048   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:10.969776   59960 cri.go:89] found id: ""
	I1126 20:14:10.969848   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.969884   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:10.969911   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:10.969997   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:11.067923   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:11.067964   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:11.082749   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:11.082781   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:11.124244   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:11.124281   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:11.173196   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:11.173232   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:11.200233   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:11.200268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:11.284292   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:11.284327   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:11.317517   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:11.317545   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:11.395020   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:11.386165   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.387087   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388651   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388979   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.390832   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:11.386165   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.387087   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388651   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388979   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.390832   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:11.395043   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:11.395056   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:11.422025   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:11.422059   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:11.500554   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:11.500588   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.028990   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:14.043196   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:14.043275   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:14.078393   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:14.078418   59960 cri.go:89] found id: ""
	I1126 20:14:14.078426   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:14.078485   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.082581   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:14.082679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:14.113586   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:14.113611   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:14.113616   59960 cri.go:89] found id: ""
	I1126 20:14:14.113623   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:14.113677   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.117367   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.120847   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:14.120921   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:14.147191   59960 cri.go:89] found id: ""
	I1126 20:14:14.147214   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.147222   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:14.147229   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:14.147287   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:14.173461   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:14.173483   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.173489   59960 cri.go:89] found id: ""
	I1126 20:14:14.173496   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:14.173560   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.177359   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.180846   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:14.180926   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:14.211699   59960 cri.go:89] found id: ""
	I1126 20:14:14.211731   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.211740   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:14.211747   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:14.211815   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:14.245320   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:14.245343   59960 cri.go:89] found id: ""
	I1126 20:14:14.245352   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:14.245422   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.249066   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:14.249133   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:14.277385   59960 cri.go:89] found id: ""
	I1126 20:14:14.277407   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.277415   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:14.277424   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:14.277436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:14.289839   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:14.289866   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:14.361142   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:14.352896   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.353542   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355081   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355655   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.357173   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:14.352896   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.353542   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355081   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355655   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.357173   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:14.361165   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:14.361179   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:14.419666   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:14.419762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:14.468633   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:14.468667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:14.557664   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:14.557696   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:14.583538   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:14.583567   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.612806   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:14.612834   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:14.638272   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:14.638300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:14.721230   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:14.721268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:14.755109   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:14.755142   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:17.358125   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:17.371898   59960 out.go:203] 
	W1126 20:14:17.375212   59960 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1126 20:14:17.375248   59960 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1126 20:14:17.375258   59960 out.go:285] * Related issues:
	W1126 20:14:17.375279   59960 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1126 20:14:17.375299   59960 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1126 20:14:17.378409   59960 out.go:203] 
	
	
	==> CRI-O <==
	Nov 26 20:07:27 ha-278127 crio[667]: time="2025-11-26T20:07:27.974719211Z" level=info msg="Started container" PID=1450 containerID=0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee description=kube-system/kube-controller-manager-ha-278127/kube-controller-manager id=87dec93c-7b21-4bf6-943c-261f225c113f name=/runtime.v1.RuntimeService/StartContainer sandboxID=aaf24b4012ae22573565b29a9c87fa6c77cadf206a779d5e6c1de76d289f128f
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.929319714Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ec2c398f-23e5-463c-bbb1-09030f312307 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.930440903Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8fc66d00-8c37-4d25-84c6-7d7ac1c54ce3 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.932121756Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5c15308b-e98f-4109-8cbc-9192ac697f01 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.932226698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.940571173Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.940960238Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f34edad928de60e13d64480bf036aa1cf6b11ecfb7c751ef02ef81267e506bc/merged/etc/passwd: no such file or directory"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.941066542Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f34edad928de60e13d64480bf036aa1cf6b11ecfb7c751ef02ef81267e506bc/merged/etc/group: no such file or directory"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.941381721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.959928416Z" level=info msg="Created container 1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5: kube-system/storage-provisioner/storage-provisioner" id=5c15308b-e98f-4109-8cbc-9192ac697f01 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.960936581Z" level=info msg="Starting container: 1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5" id=51eb399f-be44-48a0-a1b4-1c62267c418c name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.967526563Z" level=info msg="Started container" PID=1462 containerID=1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5 description=kube-system/storage-provisioner/storage-provisioner id=51eb399f-be44-48a0-a1b4-1c62267c418c name=/runtime.v1.RuntimeService/StartContainer sandboxID=21dd814126bdbbb8dab349806b778ddb306dc5100a35c1bd2fe40c8004bcd523
	Nov 26 20:07:44 ha-278127 conmon[1447]: conmon 0e221d151c3ca5256368 <ninfo>: container 1450 exited with status 1
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.240819859Z" level=info msg="Removing container: c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.256615675Z" level=info msg="Error loading conmon cgroup of container c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9: cgroup deleted" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.261280075Z" level=info msg="Removed container c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.929977452Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c9fc5566-53be-4e3a-ad5b-047dfe5df6f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.931894512Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c6b73409-e91d-4450-8804-870ca6e0b63d name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.933188155Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=b5b42e4a-b813-4466-87cd-d441eaaf849b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.933308096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.94134128Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.942037763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.965749324Z" level=info msg="Created container b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=b5b42e4a-b813-4466-87cd-d441eaaf849b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.966758303Z" level=info msg="Starting container: b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca" id=d8573d49-5a20-4657-b169-a7727449cf6d name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.975098568Z" level=info msg="Started container" PID=1498 containerID=b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca description=kube-system/kube-controller-manager-ha-278127/kube-controller-manager id=d8573d49-5a20-4657-b169-a7727449cf6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=aaf24b4012ae22573565b29a9c87fa6c77cadf206a779d5e6c1de76d289f128f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	b3d2b3bea3b9f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Running             kube-controller-manager   6                   aaf24b4012ae2       kube-controller-manager-ha-278127   kube-system
	1de9ee4cdf652       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Running             storage-provisioner       5                   21dd814126bdb       storage-provisioner                 kube-system
	0e221d151c3ca       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   5                   aaf24b4012ae2       kube-controller-manager-ha-278127   kube-system
	1a9b5dae15334       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       4                   21dd814126bdb       storage-provisioner                 kube-system
	1622dad7c067a       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Running             kube-vip                  3                   d4cb99de55854       kube-vip-ha-278127                  kube-system
	822876229de0f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   dfdbe4360041c       coredns-66bc5c9577-ndh8k            kube-system
	aef907239d286       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   2                   78d3fb27335b4       busybox-7b57f96db7-vwpd8            default
	787754735cfed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   89e2c226e09e6       coredns-66bc5c9577-bbpk7            kube-system
	d140d1950675e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               2                   b9a376ab09c3c       kindnet-gp24m                       kube-system
	7b45294efb449       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                2                   55fa9dab05c0d       kube-proxy-5fndw                    kube-system
	f5647f1652cc1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   7 minutes ago       Running             kube-apiserver            3                   c932fd4498a66       kube-apiserver-ha-278127            kube-system
	040a854900180       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            2                   773a6356cec93       kube-scheduler-ha-278127            kube-system
	106da3c0ad4fa       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Exited              kube-vip                  2                   d4cb99de55854       kube-vip-ha-278127                  kube-system
	cdc1651fea8f1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      2                   11d5891e684b3       etcd-ha-278127                      kube-system
	
	
	==> coredns [787754735cfed2e99ff1e0336a870da9b5e17eaed8d9d79b97dbfa75dd83059c] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45898 - 29384 "HINFO IN 3170256484025904488.3791759156995599050. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014293297s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [822876229de0f6cb25db3449774153712b72a0c129090a61a1aeadc760c6cad4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53615 - 2115 "HINFO IN 6991506871979899616.8642824612935885209. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017055518s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-278127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T19_58_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:58:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:14:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:13:01 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:13:01 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:13:01 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:13:01 +0000   Wed, 26 Nov 2025 19:59:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-278127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                370e19a1-8269-418f-82ce-e7791d2f9cc5
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vwpd8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-bbpk7             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 coredns-66bc5c9577-ndh8k             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-ha-278127                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-gp24m                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-278127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-278127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-5fndw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-278127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-278127                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m38s                  kube-proxy       
	  Normal   Starting                 9m30s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)      kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-278127 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m27s                  node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           9m26s                  node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           8m56s                  node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   Starting                 7m49s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m49s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m48s (x8 over 7m49s)  kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m48s (x8 over 7m49s)  kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m48s (x8 over 7m49s)  kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m1s                   node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	
	
	Name:               ha-278127-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_26T19_58_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:58:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:05:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-278127-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                77d88c20-b1f3-431d-ace6-24a69c640dde
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-72bpv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-278127-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-x82cz                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-278127-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-278127-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-p4455                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-278127-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-278127-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 9m11s              kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           15m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             11m                node-controller  Node ha-278127-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           10m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-278127-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           9m27s              node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           9m26s              node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           8m56s              node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           6m1s               node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   NodeNotReady             5m11s              node-controller  Node ha-278127-m02 status is now: NodeNotReady
	
	
	Name:               ha-278127-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_26T20_01_35_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:01:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:05:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-278127-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                4949defc-dfd6-4bc6-9c78-3cb968da2b3e
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hqq6q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kindnet-qbd6w               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-proxy-d4p99            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 8m42s                kube-proxy       
	  Normal   Starting                 12m                  kube-proxy       
	  Warning  CgroupV1                 12m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)    kubelet          Node ha-278127-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)    kubelet          Node ha-278127-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)    kubelet          Node ha-278127-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 12m                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                  node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           12m                  node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           12m                  node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   NodeReady                12m                  kubelet          Node ha-278127-m04 status is now: NodeReady
	  Normal   RegisteredNode           10m                  node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           9m27s                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           9m26s                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   Starting                 9m5s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m5s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m2s (x8 over 9m5s)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m2s (x8 over 9m5s)  kubelet          Node ha-278127-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m2s (x8 over 9m5s)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m56s                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           6m1s                 node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   NodeNotReady             5m11s                node-controller  Node ha-278127-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Nov26 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014220] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507172] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032749] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.773464] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.697672] kauditd_printk_skb: 36 callbacks suppressed
	[Nov26 19:37] overlayfs: idmapped layers are currently not supported
	[  +0.074077] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov26 19:39] hrtimer: interrupt took 16123050 ns
	[Nov26 19:43] overlayfs: idmapped layers are currently not supported
	[Nov26 19:44] overlayfs: idmapped layers are currently not supported
	[Nov26 19:58] overlayfs: idmapped layers are currently not supported
	[ +33.942210] overlayfs: idmapped layers are currently not supported
	[Nov26 19:59] overlayfs: idmapped layers are currently not supported
	[Nov26 20:01] overlayfs: idmapped layers are currently not supported
	[Nov26 20:02] overlayfs: idmapped layers are currently not supported
	[Nov26 20:04] overlayfs: idmapped layers are currently not supported
	[  +3.105496] overlayfs: idmapped layers are currently not supported
	[ +37.228314] overlayfs: idmapped layers are currently not supported
	[Nov26 20:05] overlayfs: idmapped layers are currently not supported
	[Nov26 20:06] overlayfs: idmapped layers are currently not supported
	[  +3.713866] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cdc1651fea8f10bd665928dcc7bb174b74385eb06e911da9629df17c0d9d29e8] <==
	{"level":"info","ts":"2025-11-26T20:08:15.335606Z","caller":"traceutil/trace.go:172","msg":"trace[1383728067] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:2; response_revision:2566; }","duration":"123.275763ms","start":"2025-11-26T20:08:15.212323Z","end":"2025-11-26T20:08:15.335599Z","steps":["trace[1383728067] 'agreement among raft nodes before linearized reading'  (duration: 123.198694ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.351724Z","caller":"traceutil/trace.go:172","msg":"trace[1874297602] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2567; }","duration":"115.025762ms","start":"2025-11-26T20:08:15.236689Z","end":"2025-11-26T20:08:15.351715Z","steps":["trace[1874297602] 'agreement among raft nodes before linearized reading'  (duration: 114.988281ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.353572Z","caller":"traceutil/trace.go:172","msg":"trace[590005640] range","detail":"{range_begin:/registry/cronjobs; range_end:; response_count:0; response_revision:2567; }","duration":"117.001923ms","start":"2025-11-26T20:08:15.236561Z","end":"2025-11-26T20:08:15.353563Z","steps":["trace[590005640] 'agreement among raft nodes before linearized reading'  (duration: 116.956164ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.353840Z","caller":"traceutil/trace.go:172","msg":"trace[1252963882] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:2567; }","duration":"117.289377ms","start":"2025-11-26T20:08:15.236544Z","end":"2025-11-26T20:08:15.353834Z","steps":["trace[1252963882] 'agreement among raft nodes before linearized reading'  (duration: 117.256032ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.353913Z","caller":"traceutil/trace.go:172","msg":"trace[297213381] range","detail":"{range_begin:/registry/roles; range_end:; response_count:0; response_revision:2567; }","duration":"117.437904ms","start":"2025-11-26T20:08:15.236470Z","end":"2025-11-26T20:08:15.353908Z","steps":["trace[297213381] 'agreement among raft nodes before linearized reading'  (duration: 117.416234ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.364849Z","caller":"traceutil/trace.go:172","msg":"trace[1421861513] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:59; response_revision:2567; }","duration":"128.412849ms","start":"2025-11-26T20:08:15.236425Z","end":"2025-11-26T20:08:15.364838Z","steps":["trace[1421861513] 'agreement among raft nodes before linearized reading'  (duration: 128.131786ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.364893Z","caller":"traceutil/trace.go:172","msg":"trace[1461250281] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:2567; }","duration":"128.480491ms","start":"2025-11-26T20:08:15.236409Z","end":"2025-11-26T20:08:15.364889Z","steps":["trace[1461250281] 'agreement among raft nodes before linearized reading'  (duration: 128.461948ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.364921Z","caller":"traceutil/trace.go:172","msg":"trace[502786890] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:2567; }","duration":"128.524388ms","start":"2025-11-26T20:08:15.236393Z","end":"2025-11-26T20:08:15.364917Z","steps":["trace[502786890] 'agreement among raft nodes before linearized reading'  (duration: 128.51112ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.364974Z","caller":"traceutil/trace.go:172","msg":"trace[1598355909] range","detail":"{range_begin:/registry/ipaddresses/; range_end:/registry/ipaddresses0; response_count:2; response_revision:2567; }","duration":"128.579657ms","start":"2025-11-26T20:08:15.236389Z","end":"2025-11-26T20:08:15.364969Z","steps":["trace[1598355909] 'agreement among raft nodes before linearized reading'  (duration: 128.540937ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365001Z","caller":"traceutil/trace.go:172","msg":"trace[640320053] range","detail":"{range_begin:/registry/daemonsets; range_end:; response_count:0; response_revision:2567; }","duration":"128.6531ms","start":"2025-11-26T20:08:15.236344Z","end":"2025-11-26T20:08:15.364998Z","steps":["trace[640320053] 'agreement among raft nodes before linearized reading'  (duration: 128.639283ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365081Z","caller":"traceutil/trace.go:172","msg":"trace[703339521] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2567; }","duration":"128.762571ms","start":"2025-11-26T20:08:15.236311Z","end":"2025-11-26T20:08:15.365074Z","steps":["trace[703339521] 'agreement among raft nodes before linearized reading'  (duration: 128.697349ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365157Z","caller":"traceutil/trace.go:172","msg":"trace[879094705] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:4; response_revision:2567; }","duration":"128.947693ms","start":"2025-11-26T20:08:15.236204Z","end":"2025-11-26T20:08:15.365152Z","steps":["trace[879094705] 'agreement among raft nodes before linearized reading'  (duration: 128.887427ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365183Z","caller":"traceutil/trace.go:172","msg":"trace[1712061630] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:2567; }","duration":"129.057033ms","start":"2025-11-26T20:08:15.236122Z","end":"2025-11-26T20:08:15.365179Z","steps":["trace[1712061630] 'agreement among raft nodes before linearized reading'  (duration: 129.044151ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365210Z","caller":"traceutil/trace.go:172","msg":"trace[884725043] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations; range_end:; response_count:0; response_revision:2567; }","duration":"130.176311ms","start":"2025-11-26T20:08:15.235029Z","end":"2025-11-26T20:08:15.365206Z","steps":["trace[884725043] 'agreement among raft nodes before linearized reading'  (duration: 130.162199ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365235Z","caller":"traceutil/trace.go:172","msg":"trace[1960126933] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:2567; }","duration":"138.218251ms","start":"2025-11-26T20:08:15.227012Z","end":"2025-11-26T20:08:15.365231Z","steps":["trace[1960126933] 'agreement among raft nodes before linearized reading'  (duration: 138.206222ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365306Z","caller":"traceutil/trace.go:172","msg":"trace[700774855] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:12; response_revision:2567; }","duration":"138.316595ms","start":"2025-11-26T20:08:15.226986Z","end":"2025-11-26T20:08:15.365302Z","steps":["trace[700774855] 'agreement among raft nodes before linearized reading'  (duration: 138.256756ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365332Z","caller":"traceutil/trace.go:172","msg":"trace[1878756393] range","detail":"{range_begin:/registry/resourceclaims/; range_end:/registry/resourceclaims0; response_count:0; response_revision:2567; }","duration":"138.360049ms","start":"2025-11-26T20:08:15.226968Z","end":"2025-11-26T20:08:15.365328Z","steps":["trace[1878756393] 'agreement among raft nodes before linearized reading'  (duration: 138.347619ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365357Z","caller":"traceutil/trace.go:172","msg":"trace[2116024509] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:2567; }","duration":"138.462432ms","start":"2025-11-26T20:08:15.226891Z","end":"2025-11-26T20:08:15.365354Z","steps":["trace[2116024509] 'agreement among raft nodes before linearized reading'  (duration: 138.449927ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365434Z","caller":"traceutil/trace.go:172","msg":"trace[1377873000] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:11; response_revision:2567; }","duration":"138.557683ms","start":"2025-11-26T20:08:15.226872Z","end":"2025-11-26T20:08:15.365429Z","steps":["trace[1377873000] 'agreement among raft nodes before linearized reading'  (duration: 138.494029ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365486Z","caller":"traceutil/trace.go:172","msg":"trace[251490351] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:4; response_revision:2567; }","duration":"138.671406ms","start":"2025-11-26T20:08:15.226810Z","end":"2025-11-26T20:08:15.365482Z","steps":["trace[251490351] 'agreement among raft nodes before linearized reading'  (duration: 138.633211ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365537Z","caller":"traceutil/trace.go:172","msg":"trace[570012177] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:2; response_revision:2567; }","duration":"138.744439ms","start":"2025-11-26T20:08:15.226789Z","end":"2025-11-26T20:08:15.365533Z","steps":["trace[570012177] 'agreement among raft nodes before linearized reading'  (duration: 138.706334ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365586Z","caller":"traceutil/trace.go:172","msg":"trace[1618327843] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:3; response_revision:2567; }","duration":"138.820441ms","start":"2025-11-26T20:08:15.226762Z","end":"2025-11-26T20:08:15.365583Z","steps":["trace[1618327843] 'agreement among raft nodes before linearized reading'  (duration: 138.784002ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365726Z","caller":"traceutil/trace.go:172","msg":"trace[1190967021] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:44; response_revision:2567; }","duration":"138.982458ms","start":"2025-11-26T20:08:15.226740Z","end":"2025-11-26T20:08:15.365722Z","steps":["trace[1190967021] 'agreement among raft nodes before linearized reading'  (duration: 138.855731ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365752Z","caller":"traceutil/trace.go:172","msg":"trace[191199000] range","detail":"{range_begin:/registry/ipaddresses; range_end:; response_count:0; response_revision:2567; }","duration":"139.0245ms","start":"2025-11-26T20:08:15.226723Z","end":"2025-11-26T20:08:15.365747Z","steps":["trace[191199000] 'agreement among raft nodes before linearized reading'  (duration: 139.012775ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365777Z","caller":"traceutil/trace.go:172","msg":"trace[338323478] range","detail":"{range_begin:/registry/deviceclasses/; range_end:/registry/deviceclasses0; response_count:0; response_revision:2567; }","duration":"139.071482ms","start":"2025-11-26T20:08:15.226701Z","end":"2025-11-26T20:08:15.365773Z","steps":["trace[338323478] 'agreement among raft nodes before linearized reading'  (duration: 139.05988ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:14:21 up 56 min,  0 user,  load average: 0.85, 1.08, 1.22
	Linux ha-278127 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d140d1950675ee8ccd9c84ef7a5a7da1b1e44300cc3e3a958c71e1138816061f] <==
	I1126 20:13:32.226370       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:13:42.226249       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:13:42.226389       1 main.go:301] handling current node
	I1126 20:13:42.226473       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:13:42.226511       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:13:42.226742       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:13:42.226790       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:13:52.226003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:13:52.226037       1 main.go:301] handling current node
	I1126 20:13:52.226054       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:13:52.226060       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:13:52.226201       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:13:52.226262       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:14:02.232091       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:14:02.232128       1 main.go:301] handling current node
	I1126 20:14:02.232146       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:14:02.232153       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:14:02.232327       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:14:02.232341       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:14:12.226411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:14:12.226443       1 main.go:301] handling current node
	I1126 20:14:12.226460       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:14:12.226467       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:14:12.226646       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:14:12.226661       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f5647f1652cc11a195a49a98906391e791c3136916a5e3c249907585088fad42] <==
	{"level":"warn","ts":"2025-11-26T20:08:15.185150Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019681e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185302Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400264b2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185460Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001969860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185569Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40023790e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185752Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a24960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185791Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002218000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.188111Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400089eb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.188335Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002471680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190353Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400264b2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190396Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f503c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190413Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029423c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190430Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001969860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190463Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a3b860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190481Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002378000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190499Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190513Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190529Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a24960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190727Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400089e000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	W1126 20:08:17.152713       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1126 20:08:17.154506       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:08:17.162706       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:08:19.148616       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:08:22.296241       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:09:09.201336       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:09:09.262823       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee] <==
	I1126 20:07:29.733675       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:07:30.451982       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1126 20:07:30.452014       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:07:30.453426       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1126 20:07:30.453688       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1126 20:07:30.453871       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1126 20:07:30.453945       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1126 20:07:44.473711       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca] <==
	E1126 20:08:39.054180       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:39.054188       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:39.054196       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:39.054201       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054573       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054603       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054612       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054617       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054623       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	I1126 20:08:59.075009       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mttpp"
	I1126 20:08:59.108301       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mttpp"
	I1126 20:08:59.108397       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-278127-m03"
	I1126 20:08:59.137341       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-278127-m03"
	I1126 20:08:59.137379       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-cjs7r"
	I1126 20:08:59.170242       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-cjs7r"
	I1126 20:08:59.170364       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-278127-m03"
	I1126 20:08:59.200927       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-278127-m03"
	I1126 20:08:59.201053       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-278127-m03"
	I1126 20:08:59.231029       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-278127-m03"
	I1126 20:08:59.231129       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-278127-m03"
	I1126 20:08:59.266325       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-278127-m03"
	I1126 20:08:59.266427       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-278127-m03"
	I1126 20:08:59.307467       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-278127-m03"
	I1126 20:14:09.243470       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-hqq6q"
	I1126 20:14:19.320009       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-72bpv"
	
	
	==> kube-proxy [7b45294efb44968b6b5d7d6994b3f6f118094d33ccfb9aa9a125e9d6110f41b3] <==
	I1126 20:07:27.549779       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	I1126 20:07:27.549805       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	I1126 20:07:27.549666       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	E1126 20:07:31.630334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:31.630336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:31.630470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:31.630581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:34.702391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:34.702403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:34.702509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:34.702664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:41.518262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:41.518267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:41.518397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:41.518465       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:07:41.518496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:52.462253       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:07:52.462312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:52.462400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:55.534388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:55.534401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:08:05.710253       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:08:08.782267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:08:11.854307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:08:14.930219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [040a8549001808f2d3fce3d4cf9f8dff272706173960c5e8004af8b1ea042e80] <==
	I1126 20:06:34.800738       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:06:39.572983       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:06:39.573028       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:06:39.573039       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:06:39.573046       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:06:39.693522       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:06:39.693624       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:06:39.703802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:06:39.704071       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:06:39.715887       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:06:39.704092       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:06:39.816440       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:07:21 ha-278127 kubelet[805]: E1126 20:07:21.263300     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:23 ha-278127 kubelet[805]: E1126 20:07:23.240740     805 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-278127.187ba7448d330dec  default   2559 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-278127,UID:ha-278127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-278127 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-278127,},FirstTimestamp:2025-11-26 20:06:31 +0000 UTC,LastTimestamp:2025-11-26 20:06:32.032348366 +0000 UTC m=+0.308576049,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-278127,}"
	Nov 26 20:07:27 ha-278127 kubelet[805]: I1126 20:07:27.929241     805 scope.go:117] "RemoveContainer" containerID="c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9"
	Nov 26 20:07:28 ha-278127 kubelet[805]: I1126 20:07:28.928664     805 scope.go:117] "RemoveContainer" containerID="1a9b5dae1533404a7bf684e278d137906a4f310cb5682e61046be41540e6f32b"
	Nov 26 20:07:31 ha-278127 kubelet[805]: E1126 20:07:31.162433     805 controller.go:195] "Failed to update lease" err="Put \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:31 ha-278127 kubelet[805]: E1126 20:07:31.265440     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes ha-278127)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.163428     805 controller.go:195] "Failed to update lease" err="Put \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: I1126 20:07:41.163974     805 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.266735     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.266930     805 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
	Nov 26 20:07:45 ha-278127 kubelet[805]: I1126 20:07:45.237637     805 scope.go:117] "RemoveContainer" containerID="c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9"
	Nov 26 20:07:45 ha-278127 kubelet[805]: I1126 20:07:45.238084     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:07:45 ha-278127 kubelet[805]: E1126 20:07:45.238254     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:07:47 ha-278127 kubelet[805]: I1126 20:07:47.402612     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:07:47 ha-278127 kubelet[805]: E1126 20:07:47.402814     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:07:49 ha-278127 kubelet[805]: E1126 20:07:49.241093     805 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kindnet-gp24m)" podUID="4d3597e4-de22-4f29-8c58-1aaabd4a8a56" pod="kube-system/kindnet-gp24m"
	Nov 26 20:07:51 ha-278127 kubelet[805]: E1126 20:07:51.165080     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
	Nov 26 20:07:57 ha-278127 kubelet[805]: E1126 20:07:57.243812     805 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-278127.187ba7448d32cbe5  default   2561 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-278127,UID:ha-278127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-278127 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-278127,},FirstTimestamp:2025-11-26 20:06:31 +0000 UTC,LastTimestamp:2025-11-26 20:06:32.033252015 +0000 UTC m=+0.309479698,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-278127,}"
	Nov 26 20:08:00 ha-278127 kubelet[805]: I1126 20:08:00.928844     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:08:00 ha-278127 kubelet[805]: E1126 20:08:00.929077     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:08:01 ha-278127 kubelet[805]: E1126 20:08:01.366584     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	Nov 26 20:08:01 ha-278127 kubelet[805]: E1126 20:08:01.649883     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recurs
iveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"ha-278127\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-278127/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:08:11 ha-278127 kubelet[805]: E1126 20:08:11.650209     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:08:11 ha-278127 kubelet[805]: E1126 20:08:11.768381     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": context deadline exceeded" interval="800ms"
	Nov 26 20:08:12 ha-278127 kubelet[805]: I1126 20:08:12.929036     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-278127 -n ha-278127
helpers_test.go:269: (dbg) Run:  kubectl --context ha-278127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-l9p24 busybox-7b57f96db7-rcsd2
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartCluster]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-278127 describe pod busybox-7b57f96db7-l9p24 busybox-7b57f96db7-rcsd2
helpers_test.go:290: (dbg) kubectl --context ha-278127 describe pod busybox-7b57f96db7-l9p24 busybox-7b57f96db7-rcsd2:

-- stdout --
	Name:             busybox-7b57f96db7-l9p24
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jltdj (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-jltdj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  14s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	
	
	Name:             busybox-7b57f96db7-rcsd2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zn4mp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-zn4mp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  4s    default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (478.71s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (5.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-278127" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-278127\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-278127\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-278127\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-278127
helpers_test.go:243: (dbg) docker inspect ha-278127:

-- stdout --
	[
	    {
	        "Id": "0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd",
	        "Created": "2025-11-26T19:57:51.94382214Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 60086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:06:25.13540784Z",
	            "FinishedAt": "2025-11-26T20:06:24.397214575Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/hosts",
	        "LogPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd-json.log",
	        "Name": "/ha-278127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-278127:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-278127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd",
	                "LowerDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-278127",
	                "Source": "/var/lib/docker/volumes/ha-278127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-278127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-278127",
	                "name.minikube.sigs.k8s.io": "ha-278127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb3aaf333e9f66a1f0a54705c2952cf94a31e67f170d0e073ad505006b4613f7",
	            "SandboxKey": "/var/run/docker/netns/cb3aaf333e9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-278127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:6e:15:9f:21:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "20cb65a83ad57cf8581cf982a5b25f381be527698b87a783139e32a436f750e9",
	                    "EndpointID": "217fa13f4a876f9a733e9c88a45d94a8aabe2f981d6e4c092ca2c647767455d3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-278127",
	                        "0081e5a17ed5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-278127 -n ha-278127
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 logs -n 25: (2.359843634s)
E1126 20:14:28.117789    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-278127 cp ha-278127-m03:/home/docker/cp-test.txt ha-278127-m04:/home/docker/cp-test_ha-278127-m03_ha-278127-m04.txt               │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test_ha-278127-m03_ha-278127-m04.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp testdata/cp-test.txt ha-278127-m04:/home/docker/cp-test.txt                                                             │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837002730/001/cp-test_ha-278127-m04.txt │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127:/home/docker/cp-test_ha-278127-m04_ha-278127.txt                       │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127.txt                                                 │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127-m02:/home/docker/cp-test_ha-278127-m04_ha-278127-m02.txt               │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m02 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127-m02.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127-m03:/home/docker/cp-test_ha-278127-m04_ha-278127-m03.txt               │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m03 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127-m03.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ node    │ ha-278127 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ node    │ ha-278127 node start m02 --alsologtostderr -v 5                                                                                      │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:03 UTC │
	│ node    │ ha-278127 node list --alsologtostderr -v 5                                                                                           │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:03 UTC │                     │
	│ stop    │ ha-278127 stop --alsologtostderr -v 5                                                                                                │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:03 UTC │ 26 Nov 25 20:04 UTC │
	│ start   │ ha-278127 start --wait true --alsologtostderr -v 5                                                                                   │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:04 UTC │ 26 Nov 25 20:05 UTC │
	│ node    │ ha-278127 node list --alsologtostderr -v 5                                                                                           │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │                     │
	│ node    │ ha-278127 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │ 26 Nov 25 20:05 UTC │
	│ stop    │ ha-278127 stop --alsologtostderr -v 5                                                                                                │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │ 26 Nov 25 20:06 UTC │
	│ start   │ ha-278127 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:06 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:06:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:06:24.854734   59960 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:06:24.854900   59960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.854911   59960 out.go:374] Setting ErrFile to fd 2...
	I1126 20:06:24.854917   59960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.855178   59960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:06:24.855529   59960 out.go:368] Setting JSON to false
	I1126 20:06:24.856339   59960 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2915,"bootTime":1764184670,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:06:24.856415   59960 start.go:143] virtualization:  
	I1126 20:06:24.859567   59960 out.go:179] * [ha-278127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:06:24.863328   59960 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:06:24.863432   59960 notify.go:221] Checking for updates...
	I1126 20:06:24.869239   59960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:06:24.872146   59960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:24.874915   59960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:06:24.877742   59960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:06:24.880612   59960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:06:24.883943   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:24.884479   59960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:06:24.917824   59960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:06:24.917967   59960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:06:24.982581   59960 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-26 20:06:24.973603153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:06:24.982686   59960 docker.go:319] overlay module found
	I1126 20:06:24.986072   59960 out.go:179] * Using the docker driver based on existing profile
	I1126 20:06:24.989065   59960 start.go:309] selected driver: docker
	I1126 20:06:24.989102   59960 start.go:927] validating driver "docker" against &{Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:24.989232   59960 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:06:24.989341   59960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:06:25.048426   59960 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-26 20:06:25.038525674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:06:25.048890   59960 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:06:25.048924   59960 cni.go:84] Creating CNI manager for ""
	I1126 20:06:25.048991   59960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1126 20:06:25.049039   59960 start.go:353] cluster config:
	{Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:25.052236   59960 out.go:179] * Starting "ha-278127" primary control-plane node in "ha-278127" cluster
	I1126 20:06:25.055057   59960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:06:25.058039   59960 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:06:25.061008   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:25.061089   59960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:06:25.061106   59960 cache.go:65] Caching tarball of preloaded images
	I1126 20:06:25.061005   59960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:06:25.061198   59960 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:06:25.061210   59960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:06:25.061353   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:25.080808   59960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:06:25.080831   59960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:06:25.080846   59960 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:06:25.080876   59960 start.go:360] acquireMachinesLock for ha-278127: {Name:mkb106a4eb425a1b9d0e59976741b3f940666d17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:06:25.080933   59960 start.go:364] duration metric: took 35.659µs to acquireMachinesLock for "ha-278127"
	I1126 20:06:25.080951   59960 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:06:25.080956   59960 fix.go:54] fixHost starting: 
	I1126 20:06:25.081217   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:06:25.097737   59960 fix.go:112] recreateIfNeeded on ha-278127: state=Stopped err=<nil>
	W1126 20:06:25.097772   59960 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:06:25.101061   59960 out.go:252] * Restarting existing docker container for "ha-278127" ...
	I1126 20:06:25.101155   59960 cli_runner.go:164] Run: docker start ha-278127
	I1126 20:06:25.385420   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:06:25.411970   59960 kic.go:430] container "ha-278127" state is running.
	I1126 20:06:25.412392   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:25.431941   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:25.432192   59960 machine.go:94] provisionDockerMachine start ...
	I1126 20:06:25.432251   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:25.452939   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:25.453252   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:25.453261   59960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:06:25.454097   59960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44664->127.0.0.1:32828: read: connection reset by peer
	I1126 20:06:28.605461   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127
	
	I1126 20:06:28.605490   59960 ubuntu.go:182] provisioning hostname "ha-278127"
	I1126 20:06:28.605558   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:28.623455   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:28.623769   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:28.623786   59960 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-278127 && echo "ha-278127" | sudo tee /etc/hostname
	I1126 20:06:28.778155   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127
	
	I1126 20:06:28.778256   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:28.794949   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:28.795250   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:28.795271   59960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-278127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-278127/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-278127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:06:28.942212   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:06:28.942238   59960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:06:28.942272   59960 ubuntu.go:190] setting up certificates
	I1126 20:06:28.942281   59960 provision.go:84] configureAuth start
	I1126 20:06:28.942355   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:28.960559   59960 provision.go:143] copyHostCerts
	I1126 20:06:28.960617   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:28.960653   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:06:28.960666   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:28.960744   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:06:28.960844   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:28.960866   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:06:28.960877   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:28.960906   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:06:28.960964   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:28.960985   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:06:28.960993   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:28.961023   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:06:28.961088   59960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.ha-278127 san=[127.0.0.1 192.168.49.2 ha-278127 localhost minikube]
	I1126 20:06:29.153972   59960 provision.go:177] copyRemoteCerts
	I1126 20:06:29.154049   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:06:29.154092   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.171236   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.273352   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1126 20:06:29.273420   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:06:29.290237   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1126 20:06:29.290299   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1126 20:06:29.307794   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1126 20:06:29.307855   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:06:29.325356   59960 provision.go:87] duration metric: took 383.045342ms to configureAuth
	I1126 20:06:29.325387   59960 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:06:29.325626   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:29.325742   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.342790   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:29.343103   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:29.343131   59960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:06:29.721722   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:06:29.721744   59960 machine.go:97] duration metric: took 4.28954331s to provisionDockerMachine
	I1126 20:06:29.721770   59960 start.go:293] postStartSetup for "ha-278127" (driver="docker")
	I1126 20:06:29.721791   59960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:06:29.721855   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:06:29.721907   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.742288   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.845365   59960 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:06:29.848307   59960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:06:29.848344   59960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:06:29.848355   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:06:29.848405   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:06:29.848509   59960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:06:29.848521   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /etc/ssl/certs/41292.pem
	I1126 20:06:29.848614   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:06:29.855777   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:29.872505   59960 start.go:296] duration metric: took 150.71913ms for postStartSetup
	I1126 20:06:29.872582   59960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:06:29.872629   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.889019   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.990934   59960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:06:29.995268   59960 fix.go:56] duration metric: took 4.914304894s for fixHost
	I1126 20:06:29.995338   59960 start.go:83] releasing machines lock for "ha-278127", held for 4.914396494s
	I1126 20:06:29.995443   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:30.012377   59960 ssh_runner.go:195] Run: cat /version.json
	I1126 20:06:30.012396   59960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:06:30.012433   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:30.012448   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:30.031079   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:30.032530   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:30.145909   59960 ssh_runner.go:195] Run: systemctl --version
	I1126 20:06:30.239511   59960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:06:30.276317   59960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:06:30.280821   59960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:06:30.280919   59960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:06:30.288826   59960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:06:30.288852   59960 start.go:496] detecting cgroup driver to use...
	I1126 20:06:30.288908   59960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:06:30.288973   59960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:06:30.304277   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:06:30.316900   59960 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:06:30.316968   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:06:30.332722   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:06:30.345857   59960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:06:30.458910   59960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:06:30.568914   59960 docker.go:234] disabling docker service ...
	I1126 20:06:30.568992   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:06:30.584111   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:06:30.596826   59960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:06:30.712581   59960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:06:30.831709   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:06:30.843921   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:06:30.857895   59960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:06:30.858007   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.867693   59960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:06:30.867809   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.876639   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.885174   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.893801   59960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:06:30.901606   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.910405   59960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.918408   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.927292   59960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:06:30.934726   59960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:06:30.941996   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:31.058637   59960 ssh_runner.go:195] Run: sudo systemctl restart crio
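	(The CRI-O reconfiguration above — swapping the pause image, forcing the `cgroupfs` cgroup manager, and re-inserting `conmon_cgroup` — is a series of sed edits on the `02-crio.conf` drop-in. A minimal sketch of the same edits against a throwaway file; the starting file content here is illustrative, not the real drop-in:)

```shell
# Reproduce the sed-based CRI-O drop-in edits on a scratch copy.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same substitutions as the ssh_runner commands above (GNU sed).
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

pause_line=$(grep '^pause_image' "$conf")
cgroup_line=$(grep '^cgroup_manager' "$conf")
conmon_line=$(grep '^conmon_cgroup' "$conf")
rm -f "$conf"
```

	(On the real node the edits are followed by `systemctl daemon-reload` and `systemctl restart crio`, as the log shows.)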
	I1126 20:06:31.242820   59960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:06:31.242889   59960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:06:31.246945   59960 start.go:564] Will wait 60s for crictl version
	I1126 20:06:31.247023   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:06:31.250523   59960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:06:31.274233   59960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:06:31.274317   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:06:31.302783   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:06:31.335292   59960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:06:31.338152   59960 cli_runner.go:164] Run: docker network inspect ha-278127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:06:31.354467   59960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 20:06:31.358251   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
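	(The `/etc/hosts` update above uses a filter-and-append pattern: strip any stale `host.minikube.internal` line, then append the fresh mapping. A sketch against a throwaway hosts file — the pre-existing entries are made up for illustration:)

```shell
# Filter-and-append pattern for refreshing a hosts entry (bash, uses $'\t').
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"

# Drop any old host.minikube.internal mapping, then append the new one.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

entry=$(grep $'\thost.minikube.internal$' "$hosts")
count=$(grep -c 'host.minikube.internal' "$hosts")
rm -f "$hosts"
```

	(Writing to a temp file and then copying over `/etc/hosts`, as the logged command does, avoids truncating the file that `grep` is still reading.)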
	I1126 20:06:31.368693   59960 kubeadm.go:884] updating cluster {Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubeta
il:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:06:31.368839   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:31.368891   59960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:06:31.403727   59960 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:06:31.403752   59960 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:06:31.404010   59960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:06:31.431423   59960 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:06:31.431446   59960 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:06:31.431457   59960 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1126 20:06:31.431560   59960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-278127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:06:31.431642   59960 ssh_runner.go:195] Run: crio config
	I1126 20:06:31.500147   59960 cni.go:84] Creating CNI manager for ""
	I1126 20:06:31.500186   59960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1126 20:06:31.500211   59960 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:06:31.500236   59960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-278127 NodeName:ha-278127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:06:31.500354   59960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-278127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:06:31.500372   59960 kube-vip.go:115] generating kube-vip config ...
	I1126 20:06:31.500428   59960 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1126 20:06:31.512046   59960 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:06:31.512210   59960 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1126 20:06:31.512299   59960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:06:31.519877   59960 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:06:31.519973   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1126 20:06:31.527497   59960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1126 20:06:31.540828   59960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:06:31.553623   59960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1126 20:06:31.566105   59960 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1126 20:06:31.578838   59960 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1126 20:06:31.582461   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:06:31.592186   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:31.707439   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:06:31.722268   59960 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127 for IP: 192.168.49.2
	I1126 20:06:31.722291   59960 certs.go:195] generating shared ca certs ...
	I1126 20:06:31.722307   59960 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:31.722445   59960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:06:31.722497   59960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:06:31.722508   59960 certs.go:257] generating profile certs ...
	I1126 20:06:31.722593   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key
	I1126 20:06:31.722624   59960 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab
	I1126 20:06:31.722643   59960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1126 20:06:32.010576   59960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab ...
	I1126 20:06:32.010610   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab: {Name:mk952cf244227c47330a0f303648b46942398499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.010819   59960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab ...
	I1126 20:06:32.010835   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab: {Name:mk44577b028f8c1bee471863ff089cc458df619d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.010930   59960 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt
	I1126 20:06:32.011078   59960 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key
	I1126 20:06:32.011225   59960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key
	I1126 20:06:32.011244   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1126 20:06:32.011263   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1126 20:06:32.011280   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1126 20:06:32.011297   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1126 20:06:32.011315   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1126 20:06:32.011331   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1126 20:06:32.011348   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1126 20:06:32.011362   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1126 20:06:32.011414   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:06:32.011456   59960 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:06:32.011469   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:06:32.011501   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:06:32.011530   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:06:32.011558   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:06:32.011608   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:32.011640   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.011656   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.011666   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem -> /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.012331   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:06:32.032881   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:06:32.054562   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:06:32.072828   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:06:32.091195   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:06:32.109160   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:06:32.126721   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:06:32.143729   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:06:32.162210   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:06:32.179022   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:06:32.196402   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:06:32.213770   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:06:32.227414   59960 ssh_runner.go:195] Run: openssl version
	I1126 20:06:32.233654   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:06:32.243718   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.247376   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.247448   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.289532   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:06:32.297668   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:06:32.306080   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.309793   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.309880   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.353652   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:06:32.364544   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:06:32.373430   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.381651   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.381803   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.434961   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:06:32.448704   59960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:06:32.454552   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:06:32.518905   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:06:32.599420   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:06:32.673604   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:06:32.734602   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:06:32.794948   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
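	(The six probes above rely on `openssl x509 -checkend 86400`, which exits 0 when the certificate will still be valid 86400 seconds — 24 hours — from now. A sketch using a throwaway self-signed cert; the subject and file names are illustrative, not minikube's real cert material:)

```shell
# Demonstrate the -checkend expiry probe on a freshly generated cert.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -subj '/CN=demo' -keyout "$dir/demo.key" -out "$dir/demo.crt" 2>/dev/null

# Valid for 2 days, so a 24h -checkend window should pass (exit 0).
if openssl x509 -noout -in "$dir/demo.crt" -checkend 86400; then
  status=valid
else
  status=expiring
fi
rm -rf "$dir"
```

	(A non-zero exit would signal that the cert expires within the window, which is what triggers minikube's certificate regeneration path.)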
	I1126 20:06:32.842245   59960 kubeadm.go:401] StartCluster: {Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:32.842417   59960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:06:32.842512   59960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:06:32.887488   59960 cri.go:89] found id: "f5647f1652cc11a195a49a98906391e791c3136916a5e3c249907585088fad42"
	I1126 20:06:32.887548   59960 cri.go:89] found id: "1ed2c42e7047cc402ab04fdadafa16acc5208b12eede0475826c97d34c9a071f"
	I1126 20:06:32.887577   59960 cri.go:89] found id: "040a8549001808f2d3fce3d4cf9f8dff272706173960c5e8004af8b1ea042e80"
	I1126 20:06:32.887595   59960 cri.go:89] found id: "106da3c0ad4fa03ae491f571375cda1a123fe52e6f7ef39170a84c273267c713"
	I1126 20:06:32.887614   59960 cri.go:89] found id: "cdc1651fea8f10bd665928dcc7bb174b74385eb06e911da9629df17c0d9d29e8"
	I1126 20:06:32.887650   59960 cri.go:89] found id: ""
	I1126 20:06:32.887728   59960 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:06:32.910884   59960 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:06:32Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:06:32.911021   59960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:06:32.933474   59960 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:06:32.933554   59960 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:06:32.933631   59960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:06:32.956246   59960 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:06:32.956760   59960 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-278127" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:32.956919   59960 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-2326/kubeconfig needs updating (will repair): [kubeconfig missing "ha-278127" cluster setting kubeconfig missing "ha-278127" context setting]
	I1126 20:06:32.957299   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.957946   59960 kapi.go:59] client config for ha-278127: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:06:32.958772   59960 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1126 20:06:32.958857   59960 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1126 20:06:32.958878   59960 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1126 20:06:32.958921   59960 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1126 20:06:32.958940   59960 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1126 20:06:32.958837   59960 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1126 20:06:32.959354   59960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:06:32.974056   59960 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1126 20:06:32.974125   59960 kubeadm.go:602] duration metric: took 40.551528ms to restartPrimaryControlPlane
	I1126 20:06:32.974150   59960 kubeadm.go:403] duration metric: took 131.91251ms to StartCluster
	I1126 20:06:32.974180   59960 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.974282   59960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:32.974978   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.975243   59960 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:06:32.975297   59960 start.go:242] waiting for startup goroutines ...
	I1126 20:06:32.975325   59960 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:06:32.975918   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:32.981231   59960 out.go:179] * Enabled addons: 
	I1126 20:06:32.984100   59960 addons.go:530] duration metric: took 8.777007ms for enable addons: enabled=[]
	I1126 20:06:32.984180   59960 start.go:247] waiting for cluster config update ...
	I1126 20:06:32.984203   59960 start.go:256] writing updated cluster config ...
	I1126 20:06:32.987492   59960 out.go:203] 
	I1126 20:06:32.990613   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:32.990800   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:32.994017   59960 out.go:179] * Starting "ha-278127-m02" control-plane node in "ha-278127" cluster
	I1126 20:06:32.996802   59960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:06:32.999792   59960 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:06:33.002700   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:33.002740   59960 cache.go:65] Caching tarball of preloaded images
	I1126 20:06:33.002860   59960 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:06:33.002893   59960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:06:33.003031   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:33.003254   59960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:06:33.039303   59960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:06:33.039323   59960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:06:33.039336   59960 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:06:33.039360   59960 start.go:360] acquireMachinesLock for ha-278127-m02: {Name:mkfa715e07e067116cf6c4854164186af5a39436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:06:33.039417   59960 start.go:364] duration metric: took 41.518µs to acquireMachinesLock for "ha-278127-m02"
	I1126 20:06:33.039439   59960 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:06:33.039445   59960 fix.go:54] fixHost starting: m02
	I1126 20:06:33.039721   59960 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:06:33.071417   59960 fix.go:112] recreateIfNeeded on ha-278127-m02: state=Stopped err=<nil>
	W1126 20:06:33.071449   59960 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:06:33.074580   59960 out.go:252] * Restarting existing docker container for "ha-278127-m02" ...
	I1126 20:06:33.074664   59960 cli_runner.go:164] Run: docker start ha-278127-m02
	I1126 20:06:33.452368   59960 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:06:33.483474   59960 kic.go:430] container "ha-278127-m02" state is running.
	I1126 20:06:33.483869   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:33.512602   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:33.512851   59960 machine.go:94] provisionDockerMachine start ...
	I1126 20:06:33.512917   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:33.539611   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:33.539907   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:33.539915   59960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:06:33.540557   59960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35216->127.0.0.1:32833: read: connection reset by peer
	I1126 20:06:36.755151   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127-m02
	
	I1126 20:06:36.755173   59960 ubuntu.go:182] provisioning hostname "ha-278127-m02"
	I1126 20:06:36.755238   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:36.783610   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:36.783923   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:36.783950   59960 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-278127-m02 && echo "ha-278127-m02" | sudo tee /etc/hostname
	I1126 20:06:37.026368   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127-m02
	
	I1126 20:06:37.026488   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:37.056257   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:37.056574   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:37.056592   59960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-278127-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-278127-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-278127-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:06:37.278605   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:06:37.278692   59960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:06:37.278724   59960 ubuntu.go:190] setting up certificates
	I1126 20:06:37.278764   59960 provision.go:84] configureAuth start
	I1126 20:06:37.278849   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:37.306165   59960 provision.go:143] copyHostCerts
	I1126 20:06:37.306207   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:37.306246   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:06:37.306253   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:37.306332   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:06:37.306421   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:37.306441   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:06:37.306445   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:37.306474   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:06:37.306512   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:37.306528   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:06:37.306532   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:37.306553   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:06:37.306602   59960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.ha-278127-m02 san=[127.0.0.1 192.168.49.3 ha-278127-m02 localhost minikube]
	I1126 20:06:37.781886   59960 provision.go:177] copyRemoteCerts
	I1126 20:06:37.782050   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:06:37.782113   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:37.799978   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:37.920744   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1126 20:06:37.920800   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:06:37.946353   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1126 20:06:37.946424   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 20:06:37.990628   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1126 20:06:37.990734   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:06:38.022932   59960 provision.go:87] duration metric: took 744.14174ms to configureAuth
	I1126 20:06:38.022999   59960 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:06:38.023281   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:38.023419   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:38.055902   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:38.056219   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:38.056232   59960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:06:39.163004   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:06:39.163066   59960 machine.go:97] duration metric: took 5.650194842s to provisionDockerMachine
	I1126 20:06:39.163087   59960 start.go:293] postStartSetup for "ha-278127-m02" (driver="docker")
	I1126 20:06:39.163098   59960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:06:39.163204   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:06:39.163258   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.194111   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.327619   59960 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:06:39.331483   59960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:06:39.331507   59960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:06:39.331518   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:06:39.331574   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:06:39.331649   59960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:06:39.331655   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /etc/ssl/certs/41292.pem
	I1126 20:06:39.331756   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:06:39.344886   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:39.377797   59960 start.go:296] duration metric: took 214.695598ms for postStartSetup
	I1126 20:06:39.377880   59960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:06:39.377991   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.402878   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.525023   59960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:06:39.531527   59960 fix.go:56] duration metric: took 6.492076268s for fixHost
	I1126 20:06:39.531551   59960 start.go:83] releasing machines lock for "ha-278127-m02", held for 6.492125467s
	I1126 20:06:39.531622   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:39.571062   59960 out.go:179] * Found network options:
	I1126 20:06:39.574101   59960 out.go:179]   - NO_PROXY=192.168.49.2
	W1126 20:06:39.577135   59960 proxy.go:120] fail to check proxy env: Error ip not in block
	W1126 20:06:39.577189   59960 proxy.go:120] fail to check proxy env: Error ip not in block
	I1126 20:06:39.577283   59960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:06:39.577298   59960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:06:39.577325   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.577353   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.610149   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.618182   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.847910   59960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:06:39.986067   59960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:06:39.986218   59960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:06:40.010567   59960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:06:40.010651   59960 start.go:496] detecting cgroup driver to use...
	I1126 20:06:40.010701   59960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:06:40.010777   59960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:06:40.066499   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:06:40.113187   59960 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:06:40.113357   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:06:40.138505   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:06:40.165558   59960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:06:40.434812   59960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:06:40.667360   59960 docker.go:234] disabling docker service ...
	I1126 20:06:40.667485   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:06:40.689020   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:06:40.712251   59960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:06:41.062262   59960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:06:41.446879   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:06:41.479018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:06:41.522736   59960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:06:41.522836   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.550554   59960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:06:41.550640   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.568877   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.605965   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.634535   59960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:06:41.647439   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.679616   59960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.700895   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.724575   59960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:06:41.743621   59960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:06:41.761053   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:42.179518   59960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:08:12.654700   59960 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.475140858s)
	I1126 20:08:12.654725   59960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:08:12.654777   59960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:08:12.658561   59960 start.go:564] Will wait 60s for crictl version
	I1126 20:08:12.658629   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:08:12.662122   59960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:08:12.694230   59960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:08:12.694320   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:08:12.723516   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:08:12.752895   59960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:08:12.755800   59960 out.go:179]   - env NO_PROXY=192.168.49.2
	I1126 20:08:12.758681   59960 cli_runner.go:164] Run: docker network inspect ha-278127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:08:12.774831   59960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 20:08:12.778729   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:08:12.788193   59960 mustload.go:66] Loading cluster: ha-278127
	I1126 20:08:12.788437   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:08:12.788732   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:08:12.805367   59960 host.go:66] Checking if "ha-278127" exists ...
	I1126 20:08:12.805673   59960 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127 for IP: 192.168.49.3
	I1126 20:08:12.805688   59960 certs.go:195] generating shared ca certs ...
	I1126 20:08:12.805703   59960 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:08:12.805829   59960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:08:12.805875   59960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:08:12.805885   59960 certs.go:257] generating profile certs ...
	I1126 20:08:12.806061   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key
	I1126 20:08:12.806134   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.28ad082f
	I1126 20:08:12.806177   59960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key
	I1126 20:08:12.806189   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1126 20:08:12.806203   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1126 20:08:12.806214   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1126 20:08:12.806227   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1126 20:08:12.806238   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1126 20:08:12.806249   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1126 20:08:12.806265   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1126 20:08:12.806276   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1126 20:08:12.806330   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:08:12.806364   59960 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:08:12.806376   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:08:12.806404   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:08:12.806431   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:08:12.806458   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:08:12.806505   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:08:12.806543   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:12.806557   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem -> /usr/share/ca-certificates/4129.pem
	I1126 20:08:12.806568   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /usr/share/ca-certificates/41292.pem
	I1126 20:08:12.806631   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:08:12.824408   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:08:12.926228   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1126 20:08:12.930801   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1126 20:08:12.939401   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1126 20:08:12.947934   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1126 20:08:12.960335   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1126 20:08:12.964526   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1126 20:08:12.973104   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1126 20:08:12.978204   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1126 20:08:12.987576   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1126 20:08:12.991901   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1126 20:08:13.001289   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1126 20:08:13.006200   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1126 20:08:13.014443   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:08:13.039341   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:08:13.063520   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:08:13.085219   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:08:13.103037   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:08:13.123095   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:08:13.140681   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:08:13.160781   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:08:13.180406   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:08:13.200475   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:08:13.221024   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:08:13.239900   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1126 20:08:13.254738   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1126 20:08:13.269631   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1126 20:08:13.285317   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1126 20:08:13.300359   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1126 20:08:13.320893   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1126 20:08:13.340300   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1126 20:08:13.361527   59960 ssh_runner.go:195] Run: openssl version
	I1126 20:08:13.368555   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:08:13.377244   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.381511   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.381624   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.427936   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:08:13.437023   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:08:13.445274   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.449571   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.449682   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.496315   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:08:13.504808   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:08:13.513181   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.517313   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.517396   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.579337   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:08:13.588179   59960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:08:13.593330   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:08:13.645107   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:08:13.691020   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:08:13.735436   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:08:13.780762   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:08:13.830095   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:08:13.873290   59960 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1126 20:08:13.873415   59960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-278127-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:08:13.873445   59960 kube-vip.go:115] generating kube-vip config ...
	I1126 20:08:13.873508   59960 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1126 20:08:13.885513   59960 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:08:13.885577   59960 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1126 20:08:13.885657   59960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:08:13.893550   59960 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:08:13.893628   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1126 20:08:13.901912   59960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1126 20:08:13.916015   59960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:08:13.934936   59960 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1126 20:08:13.979363   59960 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1126 20:08:13.991396   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:08:14.018397   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:08:14.385132   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:08:14.402828   59960 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:08:14.403147   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:08:14.408967   59960 out.go:179] * Verifying Kubernetes components...
	I1126 20:08:14.411916   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:08:14.659853   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:08:14.678979   59960 kapi.go:59] client config for ha-278127: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1126 20:08:14.679061   59960 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1126 20:08:14.679322   59960 node_ready.go:35] waiting up to 6m0s for node "ha-278127-m02" to be "Ready" ...
	I1126 20:08:15.269402   59960 node_ready.go:49] node "ha-278127-m02" is "Ready"
	I1126 20:08:15.269438   59960 node_ready.go:38] duration metric: took 590.083677ms for node "ha-278127-m02" to be "Ready" ...
	I1126 20:08:15.269450   59960 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:08:15.269508   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:15.770378   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:16.271005   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:16.769624   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:17.269646   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:17.770292   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:18.270233   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:18.770225   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:19.269626   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:19.770251   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:20.270592   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:20.769691   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:21.269742   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:21.769575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:22.269640   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:22.770094   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:23.269745   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:23.770093   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:24.269839   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:24.770626   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:25.270510   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:25.770352   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:26.270238   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:26.770199   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:27.270553   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:27.770570   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:28.269631   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:28.770575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:29.269663   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:29.770438   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:30.269733   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:30.769570   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:31.269688   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:31.770556   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:32.270505   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:32.770152   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:33.269716   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:33.769765   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:34.269659   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:34.769641   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:35.269866   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:35.770030   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:36.270158   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:36.770014   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:37.270234   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:37.769610   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:38.270567   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:38.770558   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:39.269653   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:39.769895   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:40.270407   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:40.769781   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:41.270338   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:41.770411   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:42.269686   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:42.770028   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:43.269580   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:43.769636   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:44.269684   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:44.769627   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:45.272055   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:45.770418   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:46.269657   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:46.770575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:47.270036   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:47.770377   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:48.270502   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:48.770450   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:49.269719   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:49.770449   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:50.269903   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:50.769675   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:51.270539   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:51.770618   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:52.270336   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:52.770354   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:53.270340   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:53.769901   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:54.270054   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:54.769747   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:55.270283   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:55.770525   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:56.269881   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:56.769908   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:57.269834   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:57.769631   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:58.270414   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:58.770529   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:59.269820   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:59.770577   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:00.269749   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:00.770275   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:01.270165   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:01.769910   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:02.269673   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:02.770492   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:03.270339   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:03.769642   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:04.269668   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:04.770177   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:05.270062   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:05.770571   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:06.270286   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:06.770466   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:07.269878   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:07.770593   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:08.270292   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:08.770068   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:09.269767   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:09.769619   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:10.270146   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:10.769659   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:11.270311   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:11.770596   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:12.269893   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:12.769649   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:13.270341   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:13.770530   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:14.269596   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:14.769532   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:14.769644   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:14.805181   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:14.805204   59960 cri.go:89] found id: ""
	I1126 20:09:14.805213   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:14.805269   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.809129   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:14.809206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:14.835451   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:14.835475   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:14.835480   59960 cri.go:89] found id: ""
	I1126 20:09:14.835487   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:14.835543   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.839249   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.842501   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:14.842574   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:14.867922   59960 cri.go:89] found id: ""
	I1126 20:09:14.867948   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.867957   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:14.867963   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:14.868022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:14.893599   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:14.893625   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:14.893630   59960 cri.go:89] found id: ""
	I1126 20:09:14.893638   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:14.893730   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.897540   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.901438   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:14.901540   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:14.929244   59960 cri.go:89] found id: ""
	I1126 20:09:14.929268   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.929277   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:14.929284   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:14.929340   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:14.956242   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:14.956264   59960 cri.go:89] found id: ""
	I1126 20:09:14.956272   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:14.956326   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.960197   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:14.960271   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:14.985332   59960 cri.go:89] found id: ""
	I1126 20:09:14.985407   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.985428   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:14.985455   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:14.985495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:15.015412   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:15.015491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:15.446082   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:15.438231    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.438877    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440458    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440891    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.442380    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:09:15.446107   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:15.446122   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:15.474426   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:15.474452   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:15.514330   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:15.514364   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:15.582633   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:15.582662   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:15.636475   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:15.636508   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:15.718181   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:15.718215   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:15.814217   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:15.814253   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:15.826793   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:15.826823   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:15.854520   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:15.854550   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.382038   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:18.401602   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:18.401678   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:18.435808   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:18.435831   59960 cri.go:89] found id: ""
	I1126 20:09:18.435839   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:18.435907   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.439686   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:18.439801   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:18.476740   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:18.476764   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:18.476770   59960 cri.go:89] found id: ""
	I1126 20:09:18.476787   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:18.476889   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.480732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.484682   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:18.484783   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:18.511910   59960 cri.go:89] found id: ""
	I1126 20:09:18.511974   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.511989   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:18.511996   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:18.512055   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:18.547921   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:18.547988   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:18.548006   59960 cri.go:89] found id: ""
	I1126 20:09:18.548014   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:18.548071   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.552076   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.556982   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:18.557066   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:18.587286   59960 cri.go:89] found id: ""
	I1126 20:09:18.587313   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.587333   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:18.587340   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:18.587401   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:18.620541   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.620559   59960 cri.go:89] found id: ""
	I1126 20:09:18.620567   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:18.620626   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.624723   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:18.624796   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:18.653037   59960 cri.go:89] found id: ""
	I1126 20:09:18.653060   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.653068   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:18.653077   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:18.653090   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.684308   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:18.684335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:18.776764   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:18.776798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:18.865581   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:18.856655    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858014    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858939    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.859710    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.861248    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:09:18.865603   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:18.865616   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:18.909234   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:18.909270   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:18.960436   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:18.960477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:18.990735   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:18.990766   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:19.069643   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:19.069722   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:19.104112   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:19.104137   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:19.118175   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:19.118204   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:19.148200   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:19.148229   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:21.687827   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:21.698536   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:21.698621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:21.730147   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:21.730171   59960 cri.go:89] found id: ""
	I1126 20:09:21.730180   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:21.730235   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.735922   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:21.736012   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:21.763452   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:21.763481   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:21.763486   59960 cri.go:89] found id: ""
	I1126 20:09:21.763494   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:21.763551   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.767451   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.771041   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:21.771140   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:21.803663   59960 cri.go:89] found id: ""
	I1126 20:09:21.803688   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.803697   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:21.803703   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:21.803767   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:21.832470   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:21.832496   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:21.832501   59960 cri.go:89] found id: ""
	I1126 20:09:21.832510   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:21.832567   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.836410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.840076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:21.840157   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:21.866968   59960 cri.go:89] found id: ""
	I1126 20:09:21.866994   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.867004   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:21.867011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:21.867093   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:21.892977   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:21.893000   59960 cri.go:89] found id: ""
	I1126 20:09:21.893008   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:21.893083   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.896906   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:21.897019   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:21.923720   59960 cri.go:89] found id: ""
	I1126 20:09:21.923744   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.923753   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:21.923762   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:21.923793   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:22.011751   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:22.003342    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.003880    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.005519    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.006189    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.007784    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:09:22.011856   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:22.011890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:22.042091   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:22.042121   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:22.079857   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:22.079886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:22.179933   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:22.179973   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:22.207540   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:22.207568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:22.263434   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:22.263465   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:22.313145   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:22.313180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:22.365142   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:22.365177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:22.446886   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:22.446920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:22.483927   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:22.483961   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:24.996823   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:25.007913   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:25.007987   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:25.044777   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:25.044801   59960 cri.go:89] found id: ""
	I1126 20:09:25.044810   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:25.044870   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.048843   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:25.048923   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:25.083120   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:25.083187   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:25.083197   59960 cri.go:89] found id: ""
	I1126 20:09:25.083205   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:25.083271   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.086865   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.090526   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:25.090596   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:25.118710   59960 cri.go:89] found id: ""
	I1126 20:09:25.118735   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.118745   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:25.118752   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:25.118809   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:25.145818   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:25.145843   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:25.145850   59960 cri.go:89] found id: ""
	I1126 20:09:25.145857   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:25.145956   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.154268   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.159267   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:25.159348   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:25.185977   59960 cri.go:89] found id: ""
	I1126 20:09:25.186002   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.186011   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:25.186017   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:25.186072   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:25.213727   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:25.213751   59960 cri.go:89] found id: ""
	I1126 20:09:25.213760   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:25.213826   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.217850   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:25.217960   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:25.246743   59960 cri.go:89] found id: ""
	I1126 20:09:25.246769   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.246779   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:25.246788   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:25.246800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:25.321227   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:25.312798    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.313456    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315126    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315598    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.317138    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:09:25.321251   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:25.321288   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:25.346983   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:25.347011   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:25.407991   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:25.408027   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:25.439857   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:25.439886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:25.467227   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:25.467252   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:25.549334   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:25.549371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:25.590791   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:25.590821   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:25.636096   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:25.636130   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:25.668287   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:25.668314   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:25.765804   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:25.765838   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:28.279160   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:28.290077   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:28.290149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:28.320697   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:28.320720   59960 cri.go:89] found id: ""
	I1126 20:09:28.320729   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:28.320786   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.324391   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:28.324466   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:28.351072   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:28.351094   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:28.351099   59960 cri.go:89] found id: ""
	I1126 20:09:28.351106   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:28.351161   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.355739   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.359260   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:28.359346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:28.386343   59960 cri.go:89] found id: ""
	I1126 20:09:28.386370   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.386383   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:28.386390   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:28.386457   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:28.413613   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:28.413635   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:28.413641   59960 cri.go:89] found id: ""
	I1126 20:09:28.413648   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:28.413701   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.417403   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.420731   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:28.420810   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:28.446127   59960 cri.go:89] found id: ""
	I1126 20:09:28.446202   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.446225   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:28.446245   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:28.446337   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:28.471432   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:28.471454   59960 cri.go:89] found id: ""
	I1126 20:09:28.471462   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:28.471545   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.475058   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:28.475141   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:28.502515   59960 cri.go:89] found id: ""
	I1126 20:09:28.502539   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.502549   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:28.502559   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:28.502570   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:28.514608   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:28.514637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:28.557861   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:28.557890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:28.627880   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:28.627917   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:28.659730   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:28.659757   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:28.725495   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:28.717349    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.718072    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.719611    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.720154    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.722097    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:28.717349    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.718072    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.719611    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.720154    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.722097    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:28.725519   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:28.725532   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:28.763157   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:28.763187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:28.828543   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:28.828573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:28.855674   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:28.855707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:28.888296   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:28.888323   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:28.966101   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:28.966135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:31.560965   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:31.571673   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:31.571744   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:31.601161   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:31.601182   59960 cri.go:89] found id: ""
	I1126 20:09:31.601190   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:31.601269   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.605397   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:31.605476   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:31.631813   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:31.631835   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:31.631841   59960 cri.go:89] found id: ""
	I1126 20:09:31.631848   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:31.631904   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.635710   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.639546   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:31.639621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:31.674540   59960 cri.go:89] found id: ""
	I1126 20:09:31.674569   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.674578   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:31.674585   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:31.674643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:31.705780   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:31.705799   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:31.705803   59960 cri.go:89] found id: ""
	I1126 20:09:31.705810   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:31.705865   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.709862   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.713500   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:31.713591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:31.739394   59960 cri.go:89] found id: ""
	I1126 20:09:31.739419   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.739429   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:31.739435   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:31.739492   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:31.765811   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:31.765834   59960 cri.go:89] found id: ""
	I1126 20:09:31.765842   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:31.765960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.769463   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:31.769554   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:31.802081   59960 cri.go:89] found id: ""
	I1126 20:09:31.802107   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.802116   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:31.802153   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:31.802172   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:31.849273   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:31.849308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:31.902662   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:31.902697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:31.990675   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:31.990710   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:32.022637   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:32.022667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:32.100797   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:32.092180    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.093036    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.094703    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.095415    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.097142    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:32.092180    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.093036    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.094703    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.095415    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.097142    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:32.100820   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:32.100833   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:32.146149   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:32.146184   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:32.172943   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:32.172970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:32.199037   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:32.199063   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:32.306507   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:32.306540   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:32.319193   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:32.319221   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:34.849302   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:34.860158   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:34.860250   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:34.887094   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:34.887113   59960 cri.go:89] found id: ""
	I1126 20:09:34.887121   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:34.887177   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.890890   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:34.890964   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:34.921149   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:34.921177   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:34.921182   59960 cri.go:89] found id: ""
	I1126 20:09:34.921189   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:34.921243   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.924938   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.928493   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:34.928569   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:34.954052   59960 cri.go:89] found id: ""
	I1126 20:09:34.954078   59960 logs.go:282] 0 containers: []
	W1126 20:09:34.954087   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:34.954093   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:34.954206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:34.985031   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:34.985054   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:34.985059   59960 cri.go:89] found id: ""
	I1126 20:09:34.985067   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:34.985121   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.989050   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.992852   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:34.992934   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:35.019287   59960 cri.go:89] found id: ""
	I1126 20:09:35.019314   59960 logs.go:282] 0 containers: []
	W1126 20:09:35.019323   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:35.019330   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:35.019393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:35.049190   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:35.049217   59960 cri.go:89] found id: ""
	I1126 20:09:35.049237   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:35.049313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:35.053627   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:35.053713   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:35.091326   59960 cri.go:89] found id: ""
	I1126 20:09:35.091394   59960 logs.go:282] 0 containers: []
	W1126 20:09:35.091420   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:35.091440   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:35.091476   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:35.188523   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:35.188560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:35.220725   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:35.220755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:35.250614   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:35.250643   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:35.289963   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:35.289995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:35.303153   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:35.303180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:35.375929   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:35.367382    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.368117    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.369869    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.370618    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.372228    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:35.367382    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.368117    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.369869    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.370618    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.372228    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:35.375952   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:35.375968   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:35.403037   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:35.403066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:35.445367   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:35.445402   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:35.491101   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:35.491135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:35.561489   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:35.561524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:38.150634   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:38.161275   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:38.161346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:38.189434   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:38.189461   59960 cri.go:89] found id: ""
	I1126 20:09:38.189469   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:38.189530   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.195206   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:38.195288   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:38.223137   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:38.223160   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:38.223166   59960 cri.go:89] found id: ""
	I1126 20:09:38.223173   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:38.223227   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.226977   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.230547   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:38.230624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:38.255698   59960 cri.go:89] found id: ""
	I1126 20:09:38.255723   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.255732   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:38.255742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:38.255800   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:38.285059   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:38.285082   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:38.285087   59960 cri.go:89] found id: ""
	I1126 20:09:38.285097   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:38.285151   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.288799   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.292713   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:38.292786   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:38.318862   59960 cri.go:89] found id: ""
	I1126 20:09:38.318889   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.318898   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:38.318905   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:38.318963   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:38.346973   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:38.346996   59960 cri.go:89] found id: ""
	I1126 20:09:38.347005   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:38.347057   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.350729   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:38.350856   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:38.378801   59960 cri.go:89] found id: ""
	I1126 20:09:38.378827   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.378836   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:38.378845   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:38.378915   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:38.390980   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:38.391009   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:38.422522   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:38.422550   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:38.469058   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:38.469133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:38.523109   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:38.523182   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:38.559691   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:38.559716   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:38.646468   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:38.646504   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:38.751509   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:38.751551   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:38.836492   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:38.827693    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.828759    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.829560    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.830636    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.831318    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:09:38.836516   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:38.836528   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:38.876587   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:38.876623   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:38.910948   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:38.910987   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:41.443533   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:41.454798   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:41.454873   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:41.485670   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:41.485699   59960 cri.go:89] found id: ""
	I1126 20:09:41.485707   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:41.485761   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.489619   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:41.489690   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:41.525686   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:41.525710   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:41.525714   59960 cri.go:89] found id: ""
	I1126 20:09:41.525722   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:41.525777   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.536491   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.541670   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:41.541797   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:41.570295   59960 cri.go:89] found id: ""
	I1126 20:09:41.570319   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.570327   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:41.570334   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:41.570393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:41.598145   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:41.598169   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:41.598175   59960 cri.go:89] found id: ""
	I1126 20:09:41.598182   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:41.598258   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.602230   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.606445   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:41.606530   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:41.636614   59960 cri.go:89] found id: ""
	I1126 20:09:41.636637   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.636646   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:41.636652   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:41.636707   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:41.663292   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:41.663315   59960 cri.go:89] found id: ""
	I1126 20:09:41.663327   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:41.663382   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.667194   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:41.667277   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:41.696056   59960 cri.go:89] found id: ""
	I1126 20:09:41.696081   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.696090   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:41.696099   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:41.696110   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:41.794427   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:41.794463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:41.822463   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:41.822493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:41.871566   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:41.871599   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:41.916725   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:41.916759   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:41.950381   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:41.950410   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:41.982658   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:41.982692   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:41.996639   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:41.996672   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:42.087350   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:42.079184    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.079744    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081320    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081972    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.083647    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:09:42.087369   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:42.087384   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:42.175919   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:42.176012   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:42.281379   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:42.281406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:44.882212   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:44.893873   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:44.893969   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:44.923663   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:44.923683   59960 cri.go:89] found id: ""
	I1126 20:09:44.923691   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:44.923744   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.927892   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:44.927959   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:44.958403   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:44.958423   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:44.958427   59960 cri.go:89] found id: ""
	I1126 20:09:44.958434   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:44.958486   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.962367   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.966913   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:44.966985   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:45.000482   59960 cri.go:89] found id: ""
	I1126 20:09:45.000503   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.000511   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:45.000517   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:45.000572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:45.031381   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:45.031401   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:45.031406   59960 cri.go:89] found id: ""
	I1126 20:09:45.031414   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:45.031471   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.036637   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.042551   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:45.042723   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:45.086906   59960 cri.go:89] found id: ""
	I1126 20:09:45.086987   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.087026   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:45.087050   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:45.087153   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:45.137504   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:45.137578   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:45.137598   59960 cri.go:89] found id: ""
	I1126 20:09:45.137621   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:45.137715   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.143678   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.149235   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:45.149438   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:45.196979   59960 cri.go:89] found id: ""
	I1126 20:09:45.197063   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.197089   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:45.197146   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:45.197191   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:45.267194   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:45.267280   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:45.386434   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:45.386524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:45.468233   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:45.459943    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.460742    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462336    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462624    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.464644    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:09:45.468305   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:45.468342   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:45.541622   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:45.541649   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:45.613664   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:45.613695   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:45.641765   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:45.641794   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:45.702809   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:45.702837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:45.807019   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:45.807056   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:45.820258   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:45.820289   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:45.867345   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:45.867376   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:45.921560   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:45.921596   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:48.454091   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:48.464670   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:48.464755   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:48.493056   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:48.493081   59960 cri.go:89] found id: ""
	I1126 20:09:48.493089   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:48.493144   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.496943   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:48.497007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:48.524995   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:48.525020   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:48.525025   59960 cri.go:89] found id: ""
	I1126 20:09:48.525032   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:48.525085   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.528726   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.532247   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:48.532317   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:48.557862   59960 cri.go:89] found id: ""
	I1126 20:09:48.557887   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.557896   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:48.557902   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:48.557988   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:48.587744   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:48.587765   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:48.587770   59960 cri.go:89] found id: ""
	I1126 20:09:48.587777   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:48.587832   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.591388   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.594875   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:48.594985   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:48.627277   59960 cri.go:89] found id: ""
	I1126 20:09:48.627298   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.627313   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:48.627352   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:48.627433   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:48.664063   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:48.664088   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:48.664102   59960 cri.go:89] found id: ""
	I1126 20:09:48.664110   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:48.664222   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.668219   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.671608   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:48.671680   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:48.700294   59960 cri.go:89] found id: ""
	I1126 20:09:48.700322   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.700331   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:48.700340   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:48.700351   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:48.793887   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:48.793974   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:48.807445   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:48.807472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:48.881133   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:48.873596    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.874156    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.875737    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.876232    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.877299    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:09:48.881155   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:48.881167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:48.926338   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:48.926370   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:48.980929   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:48.980964   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:49.008703   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:49.008729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:49.035020   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:49.035134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:49.075209   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:49.075239   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:49.102778   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:49.102808   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:49.148209   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:49.148243   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:49.175449   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:49.175477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:51.750461   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:51.761173   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:51.761247   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:51.792174   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:51.792200   59960 cri.go:89] found id: ""
	I1126 20:09:51.792207   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:51.792272   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.796194   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:51.796266   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:51.826309   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:51.826333   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:51.826339   59960 cri.go:89] found id: ""
	I1126 20:09:51.826346   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:51.826408   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.830049   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.833626   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:51.833703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:51.864668   59960 cri.go:89] found id: ""
	I1126 20:09:51.864693   59960 logs.go:282] 0 containers: []
	W1126 20:09:51.864702   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:51.864709   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:51.864770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:51.902154   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:51.902178   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:51.902184   59960 cri.go:89] found id: ""
	I1126 20:09:51.902191   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:51.902244   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.906099   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.909550   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:51.909622   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:51.940956   59960 cri.go:89] found id: ""
	I1126 20:09:51.940984   59960 logs.go:282] 0 containers: []
	W1126 20:09:51.940993   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:51.941000   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:51.941057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:51.967086   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:51.967112   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:51.967117   59960 cri.go:89] found id: ""
	I1126 20:09:51.967125   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:51.967206   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.970992   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.974344   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:51.974463   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:52.006654   59960 cri.go:89] found id: ""
	I1126 20:09:52.006675   59960 logs.go:282] 0 containers: []
	W1126 20:09:52.006684   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:52.006693   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:52.006705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:52.033587   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:52.033621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:52.062777   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:52.062810   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:52.136250   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:52.127112    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.127989    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.129548    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.130437    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.132317    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:09:52.136279   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:52.136292   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:52.165716   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:52.165792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:52.210120   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:52.210157   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:52.266182   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:52.266228   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:52.296704   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:52.296732   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:52.373394   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:52.373432   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:52.409405   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:52.409436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:52.508717   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:52.508755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:52.520510   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:52.520577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.069988   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:55.081385   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:55.081477   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:55.109272   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:55.109297   59960 cri.go:89] found id: ""
	I1126 20:09:55.109306   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:55.109393   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.113332   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:55.113409   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:55.144644   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.144728   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:55.144749   59960 cri.go:89] found id: ""
	I1126 20:09:55.144782   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:55.144860   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.148962   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.153598   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:55.153724   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:55.180168   59960 cri.go:89] found id: ""
	I1126 20:09:55.180235   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.180274   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:55.180302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:55.180378   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:55.207578   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:55.207606   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:55.207611   59960 cri.go:89] found id: ""
	I1126 20:09:55.207621   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:55.207698   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.211665   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.215295   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:55.215371   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:55.243201   59960 cri.go:89] found id: ""
	I1126 20:09:55.243228   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.243237   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:55.243243   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:55.243299   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:55.273345   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:55.273370   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:55.273375   59960 cri.go:89] found id: ""
	I1126 20:09:55.273382   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:55.273434   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.277156   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.280557   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:55.280629   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:55.306973   59960 cri.go:89] found id: ""
	I1126 20:09:55.307037   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.307052   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:55.307061   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:55.307072   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:55.405440   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:55.405474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:55.418598   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:55.418628   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:55.487261   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:55.479261    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.479915    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481393    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481846    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.483618    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:09:55.487286   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:55.487299   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:55.531555   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:55.531626   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:55.601020   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:55.601057   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:55.632319   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:55.632347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:55.660851   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:55.660881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:55.742963   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:55.742998   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:55.773047   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:55.773076   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.826960   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:55.826991   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:55.855917   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:55.855944   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:58.399772   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:58.415975   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:58.416043   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:58.442760   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:58.442782   59960 cri.go:89] found id: ""
	I1126 20:09:58.442792   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:58.442850   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.446527   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:58.446620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:58.476049   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:58.476071   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:58.476076   59960 cri.go:89] found id: ""
	I1126 20:09:58.476084   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:58.476141   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.480019   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.483716   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:58.483799   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:58.514116   59960 cri.go:89] found id: ""
	I1126 20:09:58.514138   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.514147   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:58.514153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:58.514220   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:58.547211   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:58.547233   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:58.547239   59960 cri.go:89] found id: ""
	I1126 20:09:58.547257   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:58.547342   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.551299   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.554848   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:58.554921   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:58.583768   59960 cri.go:89] found id: ""
	I1126 20:09:58.583793   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.583802   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:58.583809   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:58.583865   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:58.611601   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:58.611635   59960 cri.go:89] found id: ""
	I1126 20:09:58.611644   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:09:58.611703   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.615732   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:58.615802   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:58.646048   59960 cri.go:89] found id: ""
	I1126 20:09:58.646087   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.646096   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:58.646106   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:58.646135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:58.745296   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:58.745332   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:58.820265   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:58.811642    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.812262    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.813785    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.814448    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.815924    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:58.811642    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.812262    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.813785    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.814448    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.815924    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:58.820294   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:58.820308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:58.877523   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:58.877556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:58.904630   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:58.904656   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:58.980105   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:58.980138   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:58.992220   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:58.992248   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:59.019086   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:59.019112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:59.058229   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:59.058260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:59.106394   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:59.106427   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:59.134445   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:59.134474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:01.667677   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:01.679153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:01.679227   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:01.713101   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:01.713122   59960 cri.go:89] found id: ""
	I1126 20:10:01.713130   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:01.713185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.717042   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:01.717117   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:01.748792   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:01.748817   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:01.748823   59960 cri.go:89] found id: ""
	I1126 20:10:01.748832   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:01.748889   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.752752   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.756411   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:01.756487   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:01.785898   59960 cri.go:89] found id: ""
	I1126 20:10:01.785954   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.785964   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:01.785971   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:01.786033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:01.817470   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:01.817496   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:01.817502   59960 cri.go:89] found id: ""
	I1126 20:10:01.817509   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:01.817567   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.821688   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.826052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:01.826203   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:01.856542   59960 cri.go:89] found id: ""
	I1126 20:10:01.856568   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.856590   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:01.856620   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:01.856742   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:01.893138   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:01.893218   59960 cri.go:89] found id: ""
	I1126 20:10:01.893242   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:01.893337   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.897863   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:01.898026   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:01.935921   59960 cri.go:89] found id: ""
	I1126 20:10:01.935951   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.935961   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:01.935971   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:01.935985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:01.973303   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:01.973332   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:02.028454   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:02.028493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:02.074241   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:02.074272   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:02.162898   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:02.162936   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:02.176057   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:02.176088   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:02.235629   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:02.235665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:02.306607   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:02.306643   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:02.337699   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:02.337729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:02.374553   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:02.374582   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:02.481202   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:02.481238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:02.563313   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:02.555444    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.556211    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.557668    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.558242    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.559786    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:02.555444    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.556211    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.557668    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.558242    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.559786    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:05.064305   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:05.075852   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:05.075925   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:05.108322   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:05.108345   59960 cri.go:89] found id: ""
	I1126 20:10:05.108354   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:05.108410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.112382   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:05.112460   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:05.140946   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:05.141021   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:05.141040   59960 cri.go:89] found id: ""
	I1126 20:10:05.141063   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:05.141150   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.145278   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.148898   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:05.148974   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:05.176423   59960 cri.go:89] found id: ""
	I1126 20:10:05.176450   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.176459   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:05.176466   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:05.176527   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:05.204990   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:05.205013   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:05.205018   59960 cri.go:89] found id: ""
	I1126 20:10:05.205026   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:05.205088   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.208959   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.212627   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:05.212730   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:05.239581   59960 cri.go:89] found id: ""
	I1126 20:10:05.239604   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.239614   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:05.239620   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:05.239679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:05.268087   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:05.268110   59960 cri.go:89] found id: ""
	I1126 20:10:05.268119   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:05.268176   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.271819   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:05.271923   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:05.298753   59960 cri.go:89] found id: ""
	I1126 20:10:05.298819   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.298833   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:05.298843   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:05.298855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:05.325518   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:05.325548   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:05.376406   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:05.376438   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:05.428781   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:05.428943   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:05.459754   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:05.459786   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:05.487550   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:05.487581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:05.520035   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:05.520071   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:05.616425   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:05.616503   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:05.630189   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:05.630221   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:05.715272   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:05.705315    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.706188    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708012    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708749    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.710497    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:05.705315    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.706188    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708012    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708749    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.710497    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:05.715301   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:05.715315   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:05.768473   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:05.768507   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:08.349688   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:08.360619   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:08.360693   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:08.388583   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:08.388610   59960 cri.go:89] found id: ""
	I1126 20:10:08.388619   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:08.388678   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.392264   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:08.392334   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:08.418523   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:08.418549   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:08.418554   59960 cri.go:89] found id: ""
	I1126 20:10:08.418562   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:08.418621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.422368   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.425851   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:08.425954   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:08.456520   59960 cri.go:89] found id: ""
	I1126 20:10:08.456546   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.456555   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:08.456562   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:08.456620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:08.487158   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:08.487182   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:08.487186   59960 cri.go:89] found id: ""
	I1126 20:10:08.487195   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:08.487268   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.491193   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.494690   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:08.494760   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:08.523674   59960 cri.go:89] found id: ""
	I1126 20:10:08.523699   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.523708   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:08.523715   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:08.523773   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:08.569422   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:08.569442   59960 cri.go:89] found id: ""
	I1126 20:10:08.569449   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:08.569505   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.572997   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:08.573065   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:08.599736   59960 cri.go:89] found id: ""
	I1126 20:10:08.599763   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.599772   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:08.599781   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:08.599799   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:08.674461   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:08.665974    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.666705    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.668447    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.669108    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.670766    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:08.665974    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.666705    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.668447    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.669108    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.670766    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:08.674482   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:08.674495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:08.726546   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:08.726591   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:08.783639   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:08.783690   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:08.860709   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:08.860759   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:08.873030   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:08.873058   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:08.899170   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:08.899199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:08.940773   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:08.940855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:08.969671   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:08.969762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:09.001544   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:09.001621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:09.035799   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:09.035837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:11.634159   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:11.645145   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:11.645262   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:11.684091   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:11.684113   59960 cri.go:89] found id: ""
	I1126 20:10:11.684121   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:11.684198   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.687930   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:11.688002   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:11.716342   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:11.716366   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:11.716372   59960 cri.go:89] found id: ""
	I1126 20:10:11.716380   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:11.716438   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.720592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.724106   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:11.724181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:11.750971   59960 cri.go:89] found id: ""
	I1126 20:10:11.750997   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.751007   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:11.751014   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:11.751140   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:11.778888   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:11.778912   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:11.778917   59960 cri.go:89] found id: ""
	I1126 20:10:11.778924   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:11.778979   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.782704   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.786153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:11.786245   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:11.812859   59960 cri.go:89] found id: ""
	I1126 20:10:11.812924   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.812953   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:11.812972   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:11.813047   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:11.844995   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:11.845065   59960 cri.go:89] found id: ""
	I1126 20:10:11.845089   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:11.845159   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.848928   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:11.849056   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:11.878557   59960 cri.go:89] found id: ""
	I1126 20:10:11.878634   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.878657   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:11.878674   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:11.878686   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:11.911996   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:11.912024   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:11.957531   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:11.957700   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:12.002561   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:12.002600   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:12.037611   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:12.037655   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:12.124659   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:12.124695   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:12.157527   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:12.157559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:12.255561   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:12.255597   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:12.270701   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:12.270727   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:12.344084   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:12.335378    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.336132    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.337729    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.338527    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.340203    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:12.335378    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.336132    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.337729    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.338527    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.340203    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:12.344111   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:12.344127   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:12.414064   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:12.414099   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:14.957062   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:14.971279   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:14.971358   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:15.002850   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:15.002871   59960 cri.go:89] found id: ""
	I1126 20:10:15.002879   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:15.002953   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.007210   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:15.007317   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:15.044904   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:15.044929   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:15.044934   59960 cri.go:89] found id: ""
	I1126 20:10:15.044943   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:15.045037   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.050180   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.055192   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:15.055293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:15.087772   59960 cri.go:89] found id: ""
	I1126 20:10:15.087798   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.087815   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:15.087822   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:15.087883   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:15.117095   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:15.117114   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:15.117119   59960 cri.go:89] found id: ""
	I1126 20:10:15.117127   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:15.117185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.120995   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.124760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:15.124885   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:15.157854   59960 cri.go:89] found id: ""
	I1126 20:10:15.157954   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.157994   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:15.158017   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:15.158084   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:15.190383   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:15.190407   59960 cri.go:89] found id: ""
	I1126 20:10:15.190417   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:15.190474   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.194524   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:15.194624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:15.223311   59960 cri.go:89] found id: ""
	I1126 20:10:15.223337   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.223346   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:15.223355   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:15.223366   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:15.236105   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:15.236134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:15.263408   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:15.263436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:15.308099   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:15.308133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:15.370222   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:15.370258   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:15.412978   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:15.413009   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:15.482330   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:15.473679    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.474420    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476124    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476749    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.478398    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:15.473679    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.474420    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476124    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476749    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.478398    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:15.482403   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:15.482428   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:15.528305   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:15.528335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:15.564111   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:15.564138   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:15.592541   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:15.592569   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:15.673319   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:15.673357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:18.279646   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:18.290358   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:18.290427   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:18.319136   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:18.319159   59960 cri.go:89] found id: ""
	I1126 20:10:18.319168   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:18.319225   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.322893   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:18.322967   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:18.350092   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:18.350120   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:18.350126   59960 cri.go:89] found id: ""
	I1126 20:10:18.350139   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:18.350193   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.354777   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.358503   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:18.358602   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:18.396162   59960 cri.go:89] found id: ""
	I1126 20:10:18.396185   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.396193   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:18.396199   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:18.396262   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:18.430093   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:18.430119   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:18.430124   59960 cri.go:89] found id: ""
	I1126 20:10:18.430131   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:18.430196   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.434456   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.438374   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:18.438451   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:18.478030   59960 cri.go:89] found id: ""
	I1126 20:10:18.478058   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.478070   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:18.478076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:18.478137   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:18.506317   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:18.506340   59960 cri.go:89] found id: ""
	I1126 20:10:18.506349   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:18.506410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.510476   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:18.510552   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:18.550337   59960 cri.go:89] found id: ""
	I1126 20:10:18.550408   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.550436   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:18.550454   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:18.550487   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:18.621602   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:18.613602    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.614230    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.615899    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.616339    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.617881    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:18.613602    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.614230    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.615899    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.616339    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.617881    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:18.621625   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:18.621638   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:18.648795   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:18.648824   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:18.691314   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:18.691358   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:18.771327   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:18.771367   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:18.808287   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:18.808319   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:18.907011   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:18.907048   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:18.919575   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:18.919605   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:18.961664   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:18.961697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:19.020056   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:19.020092   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:19.050179   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:19.050206   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:21.599106   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:21.611209   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:21.611309   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:21.639207   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:21.639229   59960 cri.go:89] found id: ""
	I1126 20:10:21.639238   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:21.639296   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.643290   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:21.643365   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:21.675608   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:21.675633   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:21.675639   59960 cri.go:89] found id: ""
	I1126 20:10:21.675648   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:21.675702   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.679772   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.683385   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:21.683511   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:21.719004   59960 cri.go:89] found id: ""
	I1126 20:10:21.719078   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.719102   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:21.719123   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:21.719196   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:21.745555   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:21.745634   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:21.745660   59960 cri.go:89] found id: ""
	I1126 20:10:21.745681   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:21.745750   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.750313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.753830   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:21.753907   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:21.781119   59960 cri.go:89] found id: ""
	I1126 20:10:21.781199   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.781222   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:21.781243   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:21.781347   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:21.809894   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:21.810006   59960 cri.go:89] found id: ""
	I1126 20:10:21.810022   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:21.810092   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.813756   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:21.813853   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:21.840725   59960 cri.go:89] found id: ""
	I1126 20:10:21.840751   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.840760   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:21.840769   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:21.840781   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:21.854145   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:21.854177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:21.884873   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:21.884902   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:21.936427   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:21.936463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:21.990170   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:21.990205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:22.077016   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:22.077064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:22.106941   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:22.106974   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:22.136672   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:22.136703   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:22.235594   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:22.235630   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:22.305008   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:22.295860    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.296666    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.298548    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.299084    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.300765    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:22.295860    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.296666    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.298548    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.299084    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.300765    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:22.305032   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:22.305046   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:22.378673   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:22.378711   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:24.920612   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:24.931941   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:24.932015   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:24.958956   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:24.958979   59960 cri.go:89] found id: ""
	I1126 20:10:24.958988   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:24.959047   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.962853   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:24.962931   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:24.989108   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:24.989130   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:24.989134   59960 cri.go:89] found id: ""
	I1126 20:10:24.989141   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:24.989195   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.992756   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.996360   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:24.996431   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:25.023636   59960 cri.go:89] found id: ""
	I1126 20:10:25.023660   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.023670   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:25.023676   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:25.023751   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:25.056300   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:25.056325   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:25.056331   59960 cri.go:89] found id: ""
	I1126 20:10:25.056339   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:25.056407   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.060822   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.066693   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:25.066825   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:25.098171   59960 cri.go:89] found id: ""
	I1126 20:10:25.098239   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.098258   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:25.098265   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:25.098344   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:25.129634   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:25.129655   59960 cri.go:89] found id: ""
	I1126 20:10:25.129664   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:25.129759   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.134599   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:25.134715   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:25.166870   59960 cri.go:89] found id: ""
	I1126 20:10:25.166896   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.166905   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:25.166918   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:25.166931   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:25.201303   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:25.201335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:25.234106   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:25.234132   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:25.335293   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:25.335329   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:25.367895   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:25.367920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:25.408499   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:25.408540   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:25.489459   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:25.489496   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:25.525614   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:25.525642   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:25.540937   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:25.541079   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:25.619457   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:25.611129    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.611986    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.613567    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.614319    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.615842    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:25.611129    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.611986    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.613567    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.614319    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.615842    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:25.619480   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:25.619494   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:25.667380   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:25.667419   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.233076   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:28.244698   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:28.244770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:28.272507   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:28.272530   59960 cri.go:89] found id: ""
	I1126 20:10:28.272538   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:28.272596   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.276257   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:28.276333   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:28.303315   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:28.303337   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:28.303342   59960 cri.go:89] found id: ""
	I1126 20:10:28.303349   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:28.303429   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.307300   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.310655   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:28.310727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:28.337118   59960 cri.go:89] found id: ""
	I1126 20:10:28.337140   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.337150   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:28.337156   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:28.337214   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:28.364328   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.364352   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:28.364358   59960 cri.go:89] found id: ""
	I1126 20:10:28.364374   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:28.364436   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.368741   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.372299   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:28.372385   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:28.398315   59960 cri.go:89] found id: ""
	I1126 20:10:28.398342   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.398351   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:28.398357   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:28.398418   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:28.426255   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:28.426276   59960 cri.go:89] found id: ""
	I1126 20:10:28.426287   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:28.426342   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.429863   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:28.430017   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:28.456908   59960 cri.go:89] found id: ""
	I1126 20:10:28.456933   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.456942   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:28.456951   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:28.456962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:28.532783   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:28.532820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:28.637119   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:28.637160   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:28.711269   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:28.702783    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.703978    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.704633    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706176    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706692    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:28.702783    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.703978    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.704633    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706176    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706692    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:28.711288   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:28.711304   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:28.737855   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:28.737883   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:28.789442   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:28.789477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:28.820705   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:28.820738   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:28.855530   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:28.855560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:28.868297   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:28.868324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:28.913639   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:28.913673   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.973350   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:28.973386   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:31.500924   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:31.511869   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:31.511943   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:31.546414   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:31.546447   59960 cri.go:89] found id: ""
	I1126 20:10:31.546456   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:31.546559   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.550296   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:31.550368   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:31.577840   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:31.577859   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:31.577864   59960 cri.go:89] found id: ""
	I1126 20:10:31.577870   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:31.577967   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.581789   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.585352   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:31.585421   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:31.616396   59960 cri.go:89] found id: ""
	I1126 20:10:31.616419   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.616428   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:31.616435   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:31.616491   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:31.641907   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:31.641971   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:31.641977   59960 cri.go:89] found id: ""
	I1126 20:10:31.641984   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:31.642048   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.645886   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.649651   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:31.649732   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:31.682488   59960 cri.go:89] found id: ""
	I1126 20:10:31.682512   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.682521   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:31.682527   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:31.682597   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:31.713608   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:31.713632   59960 cri.go:89] found id: ""
	I1126 20:10:31.713641   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:31.713693   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.717274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:31.717349   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:31.750907   59960 cri.go:89] found id: ""
	I1126 20:10:31.750934   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.750948   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:31.750957   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:31.750970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:31.822403   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:31.813458    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.814237    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.815876    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.816493    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.818239    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:31.813458    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.814237    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.815876    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.816493    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.818239    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:31.822425   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:31.822440   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:31.849676   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:31.849705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:31.891923   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:31.891959   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:31.944564   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:31.944608   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:32.015493   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:32.015577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:32.047447   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:32.047480   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:32.127183   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:32.127225   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:32.229734   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:32.229767   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:32.243678   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:32.243719   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:32.271264   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:32.271291   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:34.809253   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:34.819692   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:34.819817   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:34.846220   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:34.846240   59960 cri.go:89] found id: ""
	I1126 20:10:34.846248   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:34.846302   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.849960   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:34.850035   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:34.875486   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:34.875510   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:34.875515   59960 cri.go:89] found id: ""
	I1126 20:10:34.875522   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:34.875591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.879655   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.883266   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:34.883341   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:34.910257   59960 cri.go:89] found id: ""
	I1126 20:10:34.910286   59960 logs.go:282] 0 containers: []
	W1126 20:10:34.910295   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:34.910302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:34.910359   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:34.936501   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:34.936526   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:34.936531   59960 cri.go:89] found id: ""
	I1126 20:10:34.936539   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:34.936602   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.940297   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.943886   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:34.943960   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:34.970440   59960 cri.go:89] found id: ""
	I1126 20:10:34.970467   59960 logs.go:282] 0 containers: []
	W1126 20:10:34.970476   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:34.970482   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:34.970540   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:34.996813   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:34.996833   59960 cri.go:89] found id: ""
	I1126 20:10:34.996842   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:34.996901   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:35.000962   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:35.001030   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:35.029207   59960 cri.go:89] found id: ""
	I1126 20:10:35.029229   59960 logs.go:282] 0 containers: []
	W1126 20:10:35.029237   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:35.029247   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:35.029259   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:35.089280   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:35.089316   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:35.137518   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:35.137557   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:35.198701   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:35.198741   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:35.226526   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:35.226560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:35.308302   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:35.308341   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:35.411713   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:35.411751   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:35.425089   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:35.425118   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:35.496500   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:35.487044    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.487890    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.489861    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.490651    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.492443    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:35.487044    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.487890    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.489861    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.490651    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.492443    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:35.496523   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:35.496538   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:35.521713   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:35.521740   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:35.552491   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:35.552520   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:38.092147   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:38.105386   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:38.105494   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:38.134115   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:38.134183   59960 cri.go:89] found id: ""
	I1126 20:10:38.134204   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:38.134297   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.138342   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:38.138463   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:38.165373   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:38.165448   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:38.165468   59960 cri.go:89] found id: ""
	I1126 20:10:38.165492   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:38.165591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.169464   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.173100   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:38.173220   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:38.201795   59960 cri.go:89] found id: ""
	I1126 20:10:38.201818   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.201826   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:38.201836   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:38.201895   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:38.234752   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:38.234776   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:38.234782   59960 cri.go:89] found id: ""
	I1126 20:10:38.234789   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:38.234845   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.239023   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.242779   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:38.242854   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:38.271155   59960 cri.go:89] found id: ""
	I1126 20:10:38.271184   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.271193   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:38.271200   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:38.271261   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:38.298657   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:38.298682   59960 cri.go:89] found id: ""
	I1126 20:10:38.298691   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:38.298766   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.302858   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:38.302929   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:38.330494   59960 cri.go:89] found id: ""
	I1126 20:10:38.330520   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.330529   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:38.330538   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:38.330570   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:38.356340   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:38.356374   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:38.401509   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:38.401542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:38.463681   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:38.463719   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:38.496848   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:38.496881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:38.524848   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:38.524875   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:38.607033   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:38.607098   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:38.709803   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:38.709840   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:38.722963   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:38.722995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:38.796592   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:38.787909    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.788704    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.790425    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.791012    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.792912    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:38.787909    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.788704    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.790425    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.791012    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.792912    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:38.796617   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:38.796635   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:38.836671   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:38.836707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:41.373598   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:41.384711   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:41.384792   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:41.414012   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:41.414038   59960 cri.go:89] found id: ""
	I1126 20:10:41.414047   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:41.414103   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.417961   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:41.418036   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:41.450051   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:41.450076   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:41.450082   59960 cri.go:89] found id: ""
	I1126 20:10:41.450089   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:41.450147   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.455240   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.459174   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:41.459275   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:41.487216   59960 cri.go:89] found id: ""
	I1126 20:10:41.487241   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.487250   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:41.487257   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:41.487340   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:41.515666   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:41.515739   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:41.515751   59960 cri.go:89] found id: ""
	I1126 20:10:41.515759   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:41.515817   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.519735   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.523565   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:41.523639   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:41.554213   59960 cri.go:89] found id: ""
	I1126 20:10:41.554240   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.554250   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:41.554256   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:41.554321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:41.584766   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:41.584790   59960 cri.go:89] found id: ""
	I1126 20:10:41.584799   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:41.584861   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.589437   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:41.589510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:41.616610   59960 cri.go:89] found id: ""
	I1126 20:10:41.616638   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.616648   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:41.616657   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:41.616669   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:41.696316   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:41.696352   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:41.765798   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:41.758434    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.758824    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760333    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760643    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.762180    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:41.758434    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.758824    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760333    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760643    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.762180    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:41.765870   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:41.765900   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:41.791490   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:41.791517   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:41.827993   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:41.828022   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:41.854480   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:41.854511   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:41.885603   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:41.885632   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:41.984936   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:41.984970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:41.997672   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:41.997701   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:42.039613   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:42.039668   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:42.100317   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:42.100359   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:44.745690   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:44.756208   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:44.756277   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:44.793586   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:44.793606   59960 cri.go:89] found id: ""
	I1126 20:10:44.793614   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:44.793666   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.797466   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:44.797561   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:44.823288   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:44.823313   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:44.823319   59960 cri.go:89] found id: ""
	I1126 20:10:44.823326   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:44.823383   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.828270   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.832190   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:44.832260   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:44.858643   59960 cri.go:89] found id: ""
	I1126 20:10:44.858694   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.858704   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:44.858711   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:44.858772   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:44.887625   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:44.887711   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:44.887722   59960 cri.go:89] found id: ""
	I1126 20:10:44.887730   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:44.887791   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.891593   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.895076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:44.895151   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:44.924994   59960 cri.go:89] found id: ""
	I1126 20:10:44.925060   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.925085   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:44.925104   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:44.925196   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:44.951783   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:44.951807   59960 cri.go:89] found id: ""
	I1126 20:10:44.951816   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:44.951874   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.955505   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:44.955620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:44.982789   59960 cri.go:89] found id: ""
	I1126 20:10:44.982814   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.982822   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:44.982831   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:44.982843   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:45.010557   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:45.010586   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:45.141549   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:45.141632   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:45.253485   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:45.253554   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:45.353619   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:45.353660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:45.408761   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:45.408795   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:45.443664   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:45.443692   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:45.470742   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:45.470773   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:45.504515   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:45.504544   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:45.608220   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:45.608254   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:45.620732   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:45.620761   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:45.707896   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:45.695026    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.696388    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.697297    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.699791    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.700340    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:45.695026    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.696388    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.697297    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.699791    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.700340    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:48.209609   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:48.220742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:48.220811   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:48.247863   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:48.247886   59960 cri.go:89] found id: ""
	I1126 20:10:48.247894   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:48.247949   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.251929   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:48.251997   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:48.280449   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:48.280470   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:48.280475   59960 cri.go:89] found id: ""
	I1126 20:10:48.280483   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:48.280537   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.284732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.288315   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:48.288405   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:48.316409   59960 cri.go:89] found id: ""
	I1126 20:10:48.316432   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.316440   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:48.316446   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:48.316506   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:48.349208   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:48.349271   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:48.349289   59960 cri.go:89] found id: ""
	I1126 20:10:48.349316   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:48.349408   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.354353   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.357751   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:48.357848   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:48.385059   59960 cri.go:89] found id: ""
	I1126 20:10:48.385081   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.385090   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:48.385107   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:48.385185   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:48.411304   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:48.411326   59960 cri.go:89] found id: ""
	I1126 20:10:48.411334   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:48.411405   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.415053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:48.415156   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:48.441024   59960 cri.go:89] found id: ""
	I1126 20:10:48.441046   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.441055   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:48.441063   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:48.441075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:48.469644   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:48.469672   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:48.510776   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:48.510859   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:48.592885   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:48.592917   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:48.620191   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:48.620216   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:48.715671   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:48.715746   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:48.730976   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:48.731004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:48.784446   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:48.784483   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:48.816189   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:48.816220   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:48.894569   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:48.894607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:48.934181   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:48.934214   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:49.000322   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:48.992247    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.992990    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994167    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994648    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.996101    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:48.992247    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.992990    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994167    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994648    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.996101    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:51.500568   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:51.512500   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:51.512570   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:51.550166   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:51.550188   59960 cri.go:89] found id: ""
	I1126 20:10:51.550196   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:51.550253   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.554115   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:51.554221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:51.580857   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:51.580880   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:51.580885   59960 cri.go:89] found id: ""
	I1126 20:10:51.580893   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:51.580949   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.584903   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.588661   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:51.588730   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:51.620121   59960 cri.go:89] found id: ""
	I1126 20:10:51.620147   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.620156   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:51.620163   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:51.620225   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:51.648043   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:51.648066   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:51.648071   59960 cri.go:89] found id: ""
	I1126 20:10:51.648079   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:51.648144   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.652146   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.656590   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:51.656658   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:51.684798   59960 cri.go:89] found id: ""
	I1126 20:10:51.684825   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.684835   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:51.684842   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:51.684900   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:51.712247   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:51.712270   59960 cri.go:89] found id: ""
	I1126 20:10:51.712279   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:51.712334   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.716105   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:51.716235   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:51.755296   59960 cri.go:89] found id: ""
	I1126 20:10:51.755373   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.755389   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:51.755400   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:51.755412   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:51.782840   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:51.782871   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:51.826403   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:51.826436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:51.894112   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:51.894148   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:51.920185   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:51.920212   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:51.993815   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:51.993856   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:52.030774   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:52.030804   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:52.112821   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:52.103396    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.104540    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.105295    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.106939    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.107489    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:52.103396    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.104540    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.105295    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.106939    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.107489    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:52.112847   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:52.112861   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:52.161738   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:52.161771   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:52.193340   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:52.193368   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:52.291814   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:52.291862   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:54.810104   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:54.820898   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:54.820971   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:54.849431   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:54.849454   59960 cri.go:89] found id: ""
	I1126 20:10:54.849462   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:54.849524   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.853394   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:54.853465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:54.879833   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:54.879855   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:54.879860   59960 cri.go:89] found id: ""
	I1126 20:10:54.879867   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:54.879926   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.883636   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.887200   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:54.887280   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:54.913349   59960 cri.go:89] found id: ""
	I1126 20:10:54.913374   59960 logs.go:282] 0 containers: []
	W1126 20:10:54.913382   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:54.913389   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:54.913446   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:54.941189   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:54.941215   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:54.941221   59960 cri.go:89] found id: ""
	I1126 20:10:54.941229   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:54.941285   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.945133   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.948594   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:54.948673   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:54.977649   59960 cri.go:89] found id: ""
	I1126 20:10:54.977677   59960 logs.go:282] 0 containers: []
	W1126 20:10:54.977687   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:54.977693   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:54.977768   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:55.008912   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:55.008938   59960 cri.go:89] found id: ""
	I1126 20:10:55.008948   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:55.009005   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:55.012659   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:55.012727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:55.056313   59960 cri.go:89] found id: ""
	I1126 20:10:55.056393   59960 logs.go:282] 0 containers: []
	W1126 20:10:55.056419   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:55.056449   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:55.056478   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:55.170137   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:55.170180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:55.194458   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:55.194489   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:55.279906   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:55.272019    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.272480    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274150    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274543    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.276078    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:55.272019    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.272480    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274150    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274543    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.276078    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:55.279931   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:55.279945   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:55.321902   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:55.321949   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:55.351446   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:55.351474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:55.426688   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:55.426723   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:55.463472   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:55.463501   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:55.510565   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:55.510598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:55.580501   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:55.580534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:55.614574   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:55.614602   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:58.162969   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:58.173910   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:58.174019   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:58.202329   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:58.202352   59960 cri.go:89] found id: ""
	I1126 20:10:58.202360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:58.202415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.206274   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:58.206347   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:58.233721   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:58.233741   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:58.233745   59960 cri.go:89] found id: ""
	I1126 20:10:58.233753   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:58.233811   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.237802   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.242346   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:58.242419   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:58.271013   59960 cri.go:89] found id: ""
	I1126 20:10:58.271038   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.271047   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:58.271053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:58.271109   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:58.298515   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:58.298538   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:58.298553   59960 cri.go:89] found id: ""
	I1126 20:10:58.298560   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:58.298617   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.302497   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.306172   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:58.306241   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:58.331672   59960 cri.go:89] found id: ""
	I1126 20:10:58.331698   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.331707   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:58.331714   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:58.331819   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:58.359197   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:58.359219   59960 cri.go:89] found id: ""
	I1126 20:10:58.359228   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:58.359307   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.363274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:58.363346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:58.403777   59960 cri.go:89] found id: ""
	I1126 20:10:58.403804   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.403814   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:58.403829   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:58.403890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:58.504667   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:58.504702   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:58.517722   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:58.517750   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:58.589740   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:58.581328    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.582205    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.583896    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.584218    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.585780    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:58.581328    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.582205    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.583896    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.584218    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.585780    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:58.589761   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:58.589774   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:58.617621   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:58.617648   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:58.660238   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:58.660281   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:58.709585   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:58.709624   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:58.783550   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:58.783586   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:58.820181   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:58.820219   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:58.848533   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:58.848564   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:58.921350   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:58.921390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:01.453687   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:01.467262   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:01.467365   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:01.498662   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:01.498715   59960 cri.go:89] found id: ""
	I1126 20:11:01.498724   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:01.498785   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.504322   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:01.504445   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:01.545072   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:01.545098   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:01.545105   59960 cri.go:89] found id: ""
	I1126 20:11:01.545113   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:01.545185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.548993   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.552685   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:01.552797   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:01.582855   59960 cri.go:89] found id: ""
	I1126 20:11:01.582881   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.582891   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:01.582897   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:01.582954   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:01.613527   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:01.613548   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:01.613553   59960 cri.go:89] found id: ""
	I1126 20:11:01.613560   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:01.613629   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.618859   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.623550   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:01.623624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:01.660116   59960 cri.go:89] found id: ""
	I1126 20:11:01.660140   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.660149   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:01.660159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:01.660221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:01.692418   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:01.692442   59960 cri.go:89] found id: ""
	I1126 20:11:01.692450   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:01.692509   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.696379   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:01.696453   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:01.729407   59960 cri.go:89] found id: ""
	I1126 20:11:01.729430   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.729439   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:01.729447   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:01.729463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:01.784458   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:01.784492   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:01.872850   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:01.872886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:01.903039   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:01.903068   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:01.942057   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:01.942084   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:02.024475   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:02.024514   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:02.128096   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:02.128133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:02.199528   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:02.191565    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.192150    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.193873    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.194411    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.195999    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:02.191565    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.192150    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.193873    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.194411    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.195999    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:02.199554   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:02.199568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:02.226949   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:02.226985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:02.270517   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:02.270555   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:02.306879   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:02.306948   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:04.822921   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:04.834951   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:04.835018   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:04.862163   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:04.862219   59960 cri.go:89] found id: ""
	I1126 20:11:04.862244   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:04.862312   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.865957   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:04.866029   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:04.895638   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:04.895658   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:04.895663   59960 cri.go:89] found id: ""
	I1126 20:11:04.895669   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:04.895722   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.899645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.903838   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:04.903909   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:04.929326   59960 cri.go:89] found id: ""
	I1126 20:11:04.929389   59960 logs.go:282] 0 containers: []
	W1126 20:11:04.929422   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:04.929442   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:04.929522   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:04.956401   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:04.956472   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:04.956491   59960 cri.go:89] found id: ""
	I1126 20:11:04.956522   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:04.956593   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.960195   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.963812   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:04.963930   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:04.990366   59960 cri.go:89] found id: ""
	I1126 20:11:04.990387   59960 logs.go:282] 0 containers: []
	W1126 20:11:04.990395   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:04.990402   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:04.990468   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:05.019718   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:05.019752   59960 cri.go:89] found id: ""
	I1126 20:11:05.019762   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:05.019824   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:05.023681   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:05.023779   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:05.053886   59960 cri.go:89] found id: ""
	I1126 20:11:05.053915   59960 logs.go:282] 0 containers: []
	W1126 20:11:05.053953   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:05.053963   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:05.053994   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:05.152926   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:05.152963   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:05.165506   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:05.165534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:05.194915   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:05.194945   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:05.235104   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:05.235137   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:05.285215   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:05.285247   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:05.314134   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:05.314162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:05.341007   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:05.341034   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:05.418277   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:05.418313   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:05.491273   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:05.482790    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.483758    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.485510    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.486097    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.487714    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:05.482790    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.483758    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.485510    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.486097    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.487714    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:05.491294   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:05.491308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:05.552151   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:05.552187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:08.086064   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:08.097504   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:08.097574   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:08.126757   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:08.126780   59960 cri.go:89] found id: ""
	I1126 20:11:08.126789   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:08.126851   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.131043   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:08.131119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:08.158212   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:08.158274   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:08.158289   59960 cri.go:89] found id: ""
	I1126 20:11:08.158297   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:08.158360   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.162104   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.166980   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:08.167053   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:08.193258   59960 cri.go:89] found id: ""
	I1126 20:11:08.193290   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.193300   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:08.193307   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:08.193374   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:08.219187   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:08.219210   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:08.219216   59960 cri.go:89] found id: ""
	I1126 20:11:08.219234   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:08.219313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.223489   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.227150   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:08.227228   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:08.255318   59960 cri.go:89] found id: ""
	I1126 20:11:08.255340   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.255348   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:08.255355   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:08.255411   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:08.282171   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:08.282194   59960 cri.go:89] found id: ""
	I1126 20:11:08.282202   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:08.282273   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.285788   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:08.285852   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:08.315430   59960 cri.go:89] found id: ""
	I1126 20:11:08.315505   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.315538   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:08.315560   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:08.315580   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:08.345199   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:08.345268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:08.441184   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:08.441220   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:08.511176   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:08.500509    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.501151    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504004    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504546    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.506870    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:08.500509    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.501151    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504004    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504546    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.506870    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:08.511208   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:08.511222   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:08.543421   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:08.543450   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:08.604175   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:08.604207   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:08.632557   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:08.632623   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:08.663480   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:08.663506   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:08.675096   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:08.675127   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:08.713968   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:08.713998   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:08.759141   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:08.759176   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:11.351574   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:11.361875   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:11.361972   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:11.388446   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:11.388515   59960 cri.go:89] found id: ""
	I1126 20:11:11.388529   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:11.388594   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.392093   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:11.392176   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:11.421855   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:11.421875   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:11.421880   59960 cri.go:89] found id: ""
	I1126 20:11:11.421887   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:11.421974   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.425675   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.429670   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:11.429770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:11.455248   59960 cri.go:89] found id: ""
	I1126 20:11:11.455272   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.455280   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:11.455287   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:11.455349   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:11.481734   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:11.481755   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:11.481761   59960 cri.go:89] found id: ""
	I1126 20:11:11.481769   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:11.481841   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.485836   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.489303   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:11.489380   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:11.521985   59960 cri.go:89] found id: ""
	I1126 20:11:11.522011   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.522020   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:11.522036   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:11.522095   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:11.561668   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:11.561700   59960 cri.go:89] found id: ""
	I1126 20:11:11.561708   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:11.561772   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.565986   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:11.566063   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:11.594364   59960 cri.go:89] found id: ""
	I1126 20:11:11.594386   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.594395   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:11.594404   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:11.594440   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:11.639020   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:11.639057   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:11.709026   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:11.709063   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:11.739742   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:11.739771   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:11.806014   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:11.797164    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798194    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798970    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.800645    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.801154    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:11.797164    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798194    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798970    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.800645    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.801154    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
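Editor's note: the repeated "connection refused" on `localhost:8443` above means nothing is listening on the apiserver port at that moment. A minimal, hedged way to check the same condition by hand (the port number is taken from the log; run on the node itself, e.g. via `minikube ssh`):

```shell
# Probe TCP 8443 the way kubectl's dial fails above.
# Uses bash's /dev/tcp pseudo-device; `timeout` bounds the attempt at 1s.
if timeout 1 bash -c '</dev/tcp/127.0.0.1/8443' 2>/dev/null; then
  echo "port 8443: open"
else
  echo "port 8443: closed"   # matches the "connection refused" state in the log
fi
```

When the probe reports `closed`, the `kubectl describe nodes` step is expected to fail exactly as logged, and minikube falls back to container-level log gathering.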
	I1126 20:11:11.806036   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:11.806048   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:11.844958   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:11.844991   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:11.876607   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:11.876634   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:11.911651   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:11.911677   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:11.991136   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:11.991170   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:12.094606   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:12.094650   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:12.107579   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:12.107609   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
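Editor's note: the "container status" step above resolves the CRI client with a fallback chain, `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`. The same pattern in isolation, as a sketch (prefer the resolved path; if `which` finds nothing, fall back to the bare command name and let `$PATH` or the outer `|| sudo docker ps -a` handle it):

```shell
# which-or-bare-name fallback, as used by the log-gathering step above.
# Prints the resolved path when crictl is installed, else the literal "crictl".
tool=$(which crictl || echo crictl)
echo "using: $tool"
```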
	I1126 20:11:14.637133   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:14.648286   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:14.648355   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:14.678404   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:14.678427   59960 cri.go:89] found id: ""
	I1126 20:11:14.678435   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:14.678495   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.682257   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:14.682330   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:14.713744   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:14.713765   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:14.713770   59960 cri.go:89] found id: ""
	I1126 20:11:14.713777   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:14.713835   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.718000   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.721792   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:14.721916   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:14.753701   59960 cri.go:89] found id: ""
	I1126 20:11:14.753767   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.753793   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:14.753812   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:14.753951   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:14.782584   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:14.782609   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:14.782615   59960 cri.go:89] found id: ""
	I1126 20:11:14.782622   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:14.782679   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.786288   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.790091   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:14.790165   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:14.816545   59960 cri.go:89] found id: ""
	I1126 20:11:14.816570   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.816579   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:14.816586   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:14.816642   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:14.846080   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:14.846100   59960 cri.go:89] found id: ""
	I1126 20:11:14.846108   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:14.846166   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.849789   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:14.849880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:14.876460   59960 cri.go:89] found id: ""
	I1126 20:11:14.876491   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.876500   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:14.876508   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:14.876518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:14.951236   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:14.951274   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:14.983322   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:14.983350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:15.061107   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:15.051102    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.052170    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.053243    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.054378    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.056334    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:15.051102    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.052170    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.053243    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.054378    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.056334    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:15.061129   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:15.061144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:15.097557   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:15.097587   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:15.138293   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:15.138326   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:15.168503   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:15.168532   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:15.267115   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:15.267150   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:15.279584   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:15.279615   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:15.326150   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:15.326184   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:15.389193   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:15.389226   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:17.918406   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:17.929053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:17.929122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:17.953884   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:17.953945   59960 cri.go:89] found id: ""
	I1126 20:11:17.953954   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:17.954015   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.957395   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:17.957465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:17.983711   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:17.983731   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:17.983735   59960 cri.go:89] found id: ""
	I1126 20:11:17.983742   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:17.983795   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.987660   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.991154   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:17.991224   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:18.019969   59960 cri.go:89] found id: ""
	I1126 20:11:18.019998   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.020008   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:18.020015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:18.020073   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:18.061149   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:18.061172   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:18.061178   59960 cri.go:89] found id: ""
	I1126 20:11:18.061186   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:18.061246   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.065578   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.069815   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:18.069885   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:18.096457   59960 cri.go:89] found id: ""
	I1126 20:11:18.096479   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.096487   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:18.096494   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:18.096554   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:18.124303   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:18.124367   59960 cri.go:89] found id: ""
	I1126 20:11:18.124392   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:18.124471   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.130707   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:18.130839   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:18.156714   59960 cri.go:89] found id: ""
	I1126 20:11:18.156740   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.156750   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:18.156759   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:18.156773   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:18.233800   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:18.233837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:18.264943   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:18.264973   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:18.343435   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:18.335872    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.336444    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.337906    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.338530    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.339816    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:18.335872    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.336444    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.337906    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.338530    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.339816    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:18.343458   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:18.343470   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:18.372998   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:18.373026   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:18.416461   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:18.416495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:18.445233   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:18.445263   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:18.545748   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:18.545787   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:18.557806   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:18.557835   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:18.622509   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:18.622542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:18.707610   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:18.707689   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:21.236452   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:21.247662   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:21.247729   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:21.276004   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:21.276030   59960 cri.go:89] found id: ""
	I1126 20:11:21.276038   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:21.276125   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.279851   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:21.279945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:21.309267   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:21.309291   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:21.309297   59960 cri.go:89] found id: ""
	I1126 20:11:21.309304   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:21.309359   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.313384   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.317026   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:21.317099   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:21.347773   59960 cri.go:89] found id: ""
	I1126 20:11:21.347799   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.347807   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:21.347817   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:21.347901   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:21.389878   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:21.389898   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:21.389902   59960 cri.go:89] found id: ""
	I1126 20:11:21.389910   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:21.390028   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.396218   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.405704   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:21.405823   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:21.458505   59960 cri.go:89] found id: ""
	I1126 20:11:21.458573   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.458605   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:21.458635   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:21.458731   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:21.486896   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:21.486961   59960 cri.go:89] found id: ""
	I1126 20:11:21.486983   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:21.487052   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.490729   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:21.490845   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:21.521776   59960 cri.go:89] found id: ""
	I1126 20:11:21.521798   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.521806   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:21.521815   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:21.521827   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:21.540126   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:21.540201   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:21.612034   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:21.604355    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.605075    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.606757    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.607410    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.608381    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:21.604355    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.605075    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.606757    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.607410    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.608381    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:21.612058   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:21.612072   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:21.658622   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:21.658657   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:21.707807   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:21.707844   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:21.769271   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:21.769306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:21.801295   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:21.801325   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:21.896605   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:21.896639   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:21.929176   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:21.929205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:21.967857   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:21.967884   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:22.001350   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:22.001375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:24.595423   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:24.606910   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:24.606980   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:24.638795   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:24.638819   59960 cri.go:89] found id: ""
	I1126 20:11:24.638827   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:24.638885   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.642601   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:24.642677   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:24.709965   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:24.709984   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:24.709989   59960 cri.go:89] found id: ""
	I1126 20:11:24.709996   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:24.710075   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.714848   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.719509   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:24.719668   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:24.756426   59960 cri.go:89] found id: ""
	I1126 20:11:24.756497   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.756521   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:24.756540   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:24.756658   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:24.803189   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:24.803256   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:24.803274   59960 cri.go:89] found id: ""
	I1126 20:11:24.803295   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:24.803379   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.808196   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.812071   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:24.812194   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:24.852305   59960 cri.go:89] found id: ""
	I1126 20:11:24.852378   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.852408   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:24.852429   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:24.852520   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:24.889194   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:24.889263   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:24.889294   59960 cri.go:89] found id: ""
	I1126 20:11:24.889320   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:24.889413   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.893347   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.897224   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:24.897334   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:24.930230   59960 cri.go:89] found id: ""
	I1126 20:11:24.930304   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.930333   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:24.930344   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:24.930371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:25.035563   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:25.035604   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:25.054082   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:25.054112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:25.096053   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:25.096081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:25.145970   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:25.146007   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:25.185648   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:25.185678   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:25.214168   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:25.214199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:25.247077   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:25.247106   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:25.338812   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:25.330325    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.331301    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.332972    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.333487    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.335076    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:25.330325    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.331301    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.332972    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.333487    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.335076    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:25.338839   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:25.338854   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:25.379564   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:25.379600   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:25.447694   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:25.447730   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:25.472568   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:25.472598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:28.058550   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:28.076007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:28.076082   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:28.106329   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:28.106351   59960 cri.go:89] found id: ""
	I1126 20:11:28.106360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:28.106418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.110514   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:28.110591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:28.140757   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:28.140777   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:28.140782   59960 cri.go:89] found id: ""
	I1126 20:11:28.140789   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:28.140842   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.144844   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.148401   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:28.148473   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:28.174921   59960 cri.go:89] found id: ""
	I1126 20:11:28.174944   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.174953   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:28.174959   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:28.175022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:28.202405   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:28.202425   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:28.202429   59960 cri.go:89] found id: ""
	I1126 20:11:28.202436   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:28.202491   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.207455   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.211480   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:28.211548   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:28.239676   59960 cri.go:89] found id: ""
	I1126 20:11:28.239749   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.239773   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:28.239793   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:28.239857   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:28.269256   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:28.269277   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:28.269282   59960 cri.go:89] found id: ""
	I1126 20:11:28.269289   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:28.269344   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.273004   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.276329   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:28.276398   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:28.302206   59960 cri.go:89] found id: ""
	I1126 20:11:28.302272   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.302298   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:28.302321   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:28.302363   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:28.332034   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:28.332062   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:28.376567   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:28.376603   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:28.441530   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:28.441568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:28.468188   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:28.468219   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:28.544745   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:28.544780   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:28.590841   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:28.590870   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:28.603163   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:28.603194   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:28.675368   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:28.666467    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.667143    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.668892    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.669848    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.671529    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:28.666467    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.667143    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.668892    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.669848    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.671529    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:28.675390   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:28.675403   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:28.716129   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:28.716160   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:28.746889   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:28.746916   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:28.784649   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:28.784678   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:31.386032   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:31.396663   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:31.396729   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:31.424252   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:31.424274   59960 cri.go:89] found id: ""
	I1126 20:11:31.424282   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:31.424337   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.427909   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:31.427983   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:31.459053   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:31.459075   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:31.459080   59960 cri.go:89] found id: ""
	I1126 20:11:31.459088   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:31.459148   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.462802   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.466564   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:31.466687   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:31.497981   59960 cri.go:89] found id: ""
	I1126 20:11:31.498003   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.498012   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:31.498018   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:31.498110   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:31.526027   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:31.526052   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:31.526057   59960 cri.go:89] found id: ""
	I1126 20:11:31.526065   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:31.526170   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.529987   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.534855   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:31.534945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:31.563109   59960 cri.go:89] found id: ""
	I1126 20:11:31.563169   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.563198   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:31.563219   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:31.563293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:31.589243   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:31.589265   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:31.589270   59960 cri.go:89] found id: ""
	I1126 20:11:31.589278   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:31.589354   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.593459   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.596946   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:31.597021   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:31.623525   59960 cri.go:89] found id: ""
	I1126 20:11:31.623558   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.623567   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:31.623576   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:31.623587   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:31.652294   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:31.652373   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:31.735258   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:31.735294   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:31.768608   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:31.768683   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:31.870428   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:31.870508   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:31.897014   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:31.897042   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:32.001263   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:32.001299   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:32.038474   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:32.038514   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:32.052890   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:32.052925   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:32.157895   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:32.150135    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.150798    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152292    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152811    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.154388    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:32.150135    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.150798    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152292    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152811    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.154388    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:32.157991   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:32.158015   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:32.202276   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:32.202312   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:32.246886   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:32.246920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:34.774920   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:34.785509   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:34.785619   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:34.817587   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:34.817656   59960 cri.go:89] found id: ""
	I1126 20:11:34.817682   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:34.817753   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.821524   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:34.821594   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:34.849130   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:34.849154   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:34.849159   59960 cri.go:89] found id: ""
	I1126 20:11:34.849167   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:34.849233   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.852945   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.856601   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:34.856684   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:34.883375   59960 cri.go:89] found id: ""
	I1126 20:11:34.883398   59960 logs.go:282] 0 containers: []
	W1126 20:11:34.883412   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:34.883450   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:34.883524   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:34.909798   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:34.909821   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:34.909826   59960 cri.go:89] found id: ""
	I1126 20:11:34.909834   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:34.909888   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.913552   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.916964   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:34.917033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:34.949567   59960 cri.go:89] found id: ""
	I1126 20:11:34.949592   59960 logs.go:282] 0 containers: []
	W1126 20:11:34.949601   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:34.949608   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:34.949663   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:34.977128   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:34.977150   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:34.977156   59960 cri.go:89] found id: ""
	I1126 20:11:34.977163   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:34.977220   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.981001   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.984842   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:34.984957   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:35.012427   59960 cri.go:89] found id: ""
	I1126 20:11:35.012460   59960 logs.go:282] 0 containers: []
	W1126 20:11:35.012470   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:35.012479   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:35.012493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:35.040355   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:35.040396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:35.085028   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:35.085064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:35.113614   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:35.113649   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:35.153880   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:35.153911   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:35.198643   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:35.198675   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:35.268315   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:35.268350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:35.295776   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:35.295804   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:35.376804   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:35.376847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:35.482429   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:35.482467   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:35.495585   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:35.495620   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:35.570301   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:35.562818    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.563633    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565195    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565472    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.566934    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:35.562818    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.563633    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565195    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565472    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.566934    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:35.570323   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:35.570336   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.104089   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:38.117181   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:38.117256   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:38.149986   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:38.150007   59960 cri.go:89] found id: ""
	I1126 20:11:38.150015   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:38.150071   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.153769   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:38.153836   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:38.181424   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:38.181445   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:38.181450   59960 cri.go:89] found id: ""
	I1126 20:11:38.181457   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:38.181514   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.186065   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.189965   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:38.190088   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:38.222377   59960 cri.go:89] found id: ""
	I1126 20:11:38.222403   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.222412   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:38.222418   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:38.222512   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:38.251289   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:38.251308   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:38.251312   59960 cri.go:89] found id: ""
	I1126 20:11:38.251319   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:38.251376   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.256455   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.260117   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:38.260191   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:38.285970   59960 cri.go:89] found id: ""
	I1126 20:11:38.285993   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.286001   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:38.286007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:38.286071   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:38.316333   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.316352   59960 cri.go:89] found id: ""
	I1126 20:11:38.316360   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:38.316418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.320056   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:38.320141   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:38.346321   59960 cri.go:89] found id: ""
	I1126 20:11:38.346343   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.346355   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:38.346365   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:38.346377   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:38.373397   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:38.373424   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:38.425362   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:38.425395   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.453015   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:38.453091   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:38.532623   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:38.532697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:38.633361   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:38.633397   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:38.645846   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:38.645873   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:38.703411   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:38.703444   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:38.767512   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:38.767547   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:38.796976   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:38.797004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:38.829009   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:38.829038   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:38.898466   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:38.890004    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.890695    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892444    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892921    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.894201    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:38.890004    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.890695    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892444    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892921    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.894201    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:41.398722   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:41.410132   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:41.410201   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:41.438116   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:41.438139   59960 cri.go:89] found id: ""
	I1126 20:11:41.438148   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:41.438205   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.442017   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:41.442090   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:41.469903   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:41.469958   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:41.469963   59960 cri.go:89] found id: ""
	I1126 20:11:41.469970   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:41.470027   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.474067   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.478045   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:41.478121   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:41.505356   59960 cri.go:89] found id: ""
	I1126 20:11:41.505421   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.505446   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:41.505473   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:41.505547   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:41.539013   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:41.539078   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:41.539097   59960 cri.go:89] found id: ""
	I1126 20:11:41.539120   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:41.539192   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.545082   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.548706   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:41.548780   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:41.575834   59960 cri.go:89] found id: ""
	I1126 20:11:41.575859   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.575867   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:41.575874   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:41.575934   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:41.611347   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:41.611373   59960 cri.go:89] found id: ""
	I1126 20:11:41.611381   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:41.611452   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.615789   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:41.615865   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:41.641022   59960 cri.go:89] found id: ""
	I1126 20:11:41.641047   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.641057   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:41.641066   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:41.641078   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:41.742347   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:41.742381   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:41.754134   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:41.754164   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:41.831601   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:41.821574    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.822287    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.823756    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.824699    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.826433    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:41.821574    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.822287    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.823756    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.824699    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.826433    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:41.831624   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:41.831637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:41.860096   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:41.860125   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:41.910250   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:41.910285   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:41.980123   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:41.980161   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:42.010802   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:42.010829   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:42.106028   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:42.106070   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:42.164514   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:42.164559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:42.271103   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:42.271151   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:44.839838   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:44.850546   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:44.850618   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:44.876918   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:44.876988   59960 cri.go:89] found id: ""
	I1126 20:11:44.877011   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:44.877094   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.881043   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:44.881125   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:44.911219   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:44.911239   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:44.911243   59960 cri.go:89] found id: ""
	I1126 20:11:44.911250   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:44.911304   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.914984   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.918517   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:44.918591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:44.948367   59960 cri.go:89] found id: ""
	I1126 20:11:44.948393   59960 logs.go:282] 0 containers: []
	W1126 20:11:44.948403   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:44.948410   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:44.948488   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:44.979725   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:44.979749   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:44.979762   59960 cri.go:89] found id: ""
	I1126 20:11:44.979770   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:44.979825   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.983672   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.987318   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:44.987393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:45.013302   59960 cri.go:89] found id: ""
	I1126 20:11:45.013326   59960 logs.go:282] 0 containers: []
	W1126 20:11:45.013335   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:45.013342   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:45.013400   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:45.055627   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:45.055649   59960 cri.go:89] found id: ""
	I1126 20:11:45.055657   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:45.055726   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:45.085558   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:45.085645   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:45.151023   59960 cri.go:89] found id: ""
	I1126 20:11:45.151097   59960 logs.go:282] 0 containers: []
	W1126 20:11:45.151125   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:45.151149   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:45.151189   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:45.299197   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:45.299495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:45.414522   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:45.414561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:45.426305   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:45.426334   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:45.498361   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:45.490138    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.490855    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.492369    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.493032    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.494581    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:45.490138    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.490855    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.492369    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.493032    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.494581    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:45.498385   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:45.498406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:45.544282   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:45.544315   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:45.572601   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:45.572628   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:45.618675   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:45.618704   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:45.644699   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:45.644729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:45.692766   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:45.692847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:45.768264   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:45.768298   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.298071   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:48.309786   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:48.309955   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:48.338906   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:48.338929   59960 cri.go:89] found id: ""
	I1126 20:11:48.338938   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:48.339013   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.342703   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:48.342807   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:48.373459   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:48.373483   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:48.373489   59960 cri.go:89] found id: ""
	I1126 20:11:48.373497   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:48.373571   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.377243   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.380907   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:48.380978   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:48.410171   59960 cri.go:89] found id: ""
	I1126 20:11:48.410194   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.410203   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:48.410210   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:48.410269   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:48.438118   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:48.438141   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.438146   59960 cri.go:89] found id: ""
	I1126 20:11:48.438153   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:48.438208   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.441706   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.445239   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:48.445331   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:48.471795   59960 cri.go:89] found id: ""
	I1126 20:11:48.471818   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.471827   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:48.471834   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:48.471894   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:48.499373   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:48.499444   59960 cri.go:89] found id: ""
	I1126 20:11:48.499459   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:48.499520   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.503413   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:48.503486   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:48.530399   59960 cri.go:89] found id: ""
	I1126 20:11:48.530421   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.530435   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:48.530450   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:48.530464   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:48.571849   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:48.571882   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:48.658179   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:48.658279   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:48.689018   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:48.689045   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:48.763174   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:48.763207   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:48.778567   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:48.778596   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:48.827328   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:48.827365   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.857288   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:48.857365   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:48.888507   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:48.888539   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:48.988930   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:48.988967   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:49.069225   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:49.055449    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.056233    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.057886    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.058530    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.060083    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:49.055449    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.056233    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.057886    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.058530    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.060083    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:49.069248   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:49.069262   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:51.595258   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:51.606745   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:51.606819   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:51.636395   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:51.636416   59960 cri.go:89] found id: ""
	I1126 20:11:51.636430   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:51.636488   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.640040   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:51.640115   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:51.676792   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:51.676812   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:51.676816   59960 cri.go:89] found id: ""
	I1126 20:11:51.676824   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:51.676877   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.681110   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.685068   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:51.685183   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:51.720013   59960 cri.go:89] found id: ""
	I1126 20:11:51.720038   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.720047   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:51.720054   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:51.720111   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:51.748336   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:51.748360   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:51.748375   59960 cri.go:89] found id: ""
	I1126 20:11:51.748383   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:51.748439   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.752267   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.756170   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:51.756241   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:51.783057   59960 cri.go:89] found id: ""
	I1126 20:11:51.783086   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.783095   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:51.783101   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:51.783163   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:51.811250   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:51.811272   59960 cri.go:89] found id: ""
	I1126 20:11:51.811282   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:51.811338   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.815120   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:51.815232   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:51.846026   59960 cri.go:89] found id: ""
	I1126 20:11:51.846049   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.846064   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:51.846074   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:51.846086   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:51.890348   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:51.890380   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:51.920851   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:51.920922   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:51.977107   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:51.977140   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:52.060932   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:52.060981   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:52.093050   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:52.093078   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:52.176431   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:52.176468   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:52.215980   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:52.216012   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:52.327858   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:52.327901   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:52.340252   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:52.340285   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:52.418993   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:52.410090    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.410776    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.412508    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.413095    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.414685    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:52.410090    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.410776    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.412508    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.413095    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.414685    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:52.419016   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:52.419029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:54.944539   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:54.955542   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:54.955615   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:54.986048   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:54.986074   59960 cri.go:89] found id: ""
	I1126 20:11:54.986083   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:54.986139   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:54.989757   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:54.989829   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:55.016053   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:55.016085   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:55.016091   59960 cri.go:89] found id: ""
	I1126 20:11:55.016099   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:55.016174   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.019787   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.023250   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:55.023321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:55.069450   59960 cri.go:89] found id: ""
	I1126 20:11:55.069473   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.069482   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:55.069489   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:55.069572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:55.098641   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:55.098664   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:55.098669   59960 cri.go:89] found id: ""
	I1126 20:11:55.098676   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:55.098732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.102435   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.106227   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:55.106351   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:55.138121   59960 cri.go:89] found id: ""
	I1126 20:11:55.138145   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.138154   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:55.138174   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:55.138236   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:55.167513   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:55.167544   59960 cri.go:89] found id: ""
	I1126 20:11:55.167553   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:55.167618   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.171313   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:55.171381   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:55.202786   59960 cri.go:89] found id: ""
	I1126 20:11:55.202813   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.202822   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:55.202832   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:55.202866   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:55.302444   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:55.302521   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:55.340281   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:55.340307   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:55.380642   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:55.380671   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:55.413529   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:55.413559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:55.441562   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:55.441590   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:55.518521   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:55.518561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:55.558444   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:55.558478   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:55.571280   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:55.571312   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:55.640808   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:55.631279    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.631827    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.633724    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.634294    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.636622    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:55.631279    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.631827    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.633724    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.634294    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.636622    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:55.640840   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:55.640855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:55.687489   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:55.687525   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
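The `cri.go:89] found id:` lines above come from parsing the output of `sudo crictl ps -a --quiet --name=<component>`, which prints one container ID per line (or nothing when no container matches, yielding the `0 containers: []` lines). A minimal sketch of that parsing, assuming only the one-ID-per-line output format visible in this log (`parseContainerIDs` is a hypothetical helper, not minikube's actual function name):

```go
package main

import (
	"fmt"
	"strings"
)

// parseContainerIDs turns `crictl ps -a --quiet` output into a slice of
// container IDs. Empty or whitespace-only output means no matching
// containers, which the log reports as `0 containers: []`.
func parseContainerIDs(out string) []string {
	ids := []string{}
	for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids
}

func main() {
	// Two scheduler containers, as in the log above (IDs truncated here).
	out := "b205532732084e6f\n37530bc45e0390fa\n"
	ids := parseContainerIDs(out)
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
```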
	I1126 20:11:58.274871   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:58.285429   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:58.285499   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:58.313375   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:58.313399   59960 cri.go:89] found id: ""
	I1126 20:11:58.313406   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:58.313459   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.316973   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:58.317046   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:58.343195   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:58.343222   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:58.343233   59960 cri.go:89] found id: ""
	I1126 20:11:58.343241   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:58.343299   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.346903   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.350464   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:58.350532   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:58.389630   59960 cri.go:89] found id: ""
	I1126 20:11:58.389651   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.389659   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:58.389666   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:58.389727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:58.417327   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.417347   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:58.417351   59960 cri.go:89] found id: ""
	I1126 20:11:58.417358   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:58.417415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.421999   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.425800   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:58.425864   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:58.452945   59960 cri.go:89] found id: ""
	I1126 20:11:58.452969   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.452977   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:58.452983   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:58.453043   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:58.488167   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:58.488198   59960 cri.go:89] found id: ""
	I1126 20:11:58.488207   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:58.488290   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.492158   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:58.492254   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:58.519792   59960 cri.go:89] found id: ""
	I1126 20:11:58.519815   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.519824   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:58.519833   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:58.519845   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:58.539152   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:58.539178   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:58.611844   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:58.602656    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.604433    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.605264    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.606165    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.607783    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:58.602656    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.604433    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.605264    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.606165    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.607783    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:58.611916   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:58.611936   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:58.653684   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:58.653755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:58.701629   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:58.701698   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:58.797678   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:58.797712   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:58.826943   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:58.826971   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:58.870347   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:58.870382   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.935086   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:58.935124   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:58.968825   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:58.968856   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:58.997914   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:58.998030   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:01.577720   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:01.589568   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:01.589642   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:01.621435   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:01.621457   59960 cri.go:89] found id: ""
	I1126 20:12:01.621466   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:01.621521   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.625557   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:01.625630   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:01.653424   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:01.653447   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:01.653452   59960 cri.go:89] found id: ""
	I1126 20:12:01.653459   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:01.653520   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.658113   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.663163   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:01.663279   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:01.690617   59960 cri.go:89] found id: ""
	I1126 20:12:01.690692   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.690707   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:01.690714   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:01.690776   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:01.721669   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:01.721691   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:01.721696   59960 cri.go:89] found id: ""
	I1126 20:12:01.721705   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:01.721760   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.725774   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.729528   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:01.729608   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:01.755428   59960 cri.go:89] found id: ""
	I1126 20:12:01.755452   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.755461   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:01.755468   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:01.755529   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:01.783818   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:01.783841   59960 cri.go:89] found id: ""
	I1126 20:12:01.783849   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:01.783905   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.787656   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:01.787726   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:01.815958   59960 cri.go:89] found id: ""
	I1126 20:12:01.816025   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.816050   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:01.816067   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:01.816080   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:01.867560   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:01.867592   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:01.932205   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:01.932256   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:02.002408   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:02.002441   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:02.051577   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:02.051612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:02.088918   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:02.088948   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:02.168080   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:02.158735    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.159253    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162045    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162706    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.164462    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:02.158735    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.159253    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162045    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162706    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.164462    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:02.168105   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:02.168119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:02.244385   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:02.244435   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:02.282263   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:02.282293   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:02.383774   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:02.383810   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:02.399682   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:02.399712   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:04.928429   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:04.939418   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:04.939502   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:04.967318   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:04.967344   59960 cri.go:89] found id: ""
	I1126 20:12:04.967352   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:04.967406   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:04.971172   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:04.971242   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:04.998636   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:04.998660   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:04.998666   59960 cri.go:89] found id: ""
	I1126 20:12:04.998673   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:04.998728   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.002734   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.006234   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:05.006304   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:05.031905   59960 cri.go:89] found id: ""
	I1126 20:12:05.031931   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.031948   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:05.031954   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:05.032022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:05.062024   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:05.062047   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:05.062053   59960 cri.go:89] found id: ""
	I1126 20:12:05.062061   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:05.062119   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.066633   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.070769   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:05.070894   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:05.098088   59960 cri.go:89] found id: ""
	I1126 20:12:05.098113   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.098123   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:05.098130   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:05.098213   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:05.131371   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:05.131394   59960 cri.go:89] found id: ""
	I1126 20:12:05.131403   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:05.131477   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.135270   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:05.135372   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:05.162342   59960 cri.go:89] found id: ""
	I1126 20:12:05.162365   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.162374   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:05.162383   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:05.162395   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:05.235501   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:05.227170    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.227750    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229253    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229720    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.231198    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:05.235522   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:05.235536   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:05.263102   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:05.263128   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:05.302111   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:05.302144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:05.333187   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:05.333216   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:05.359477   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:05.359505   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:05.438760   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:05.438798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:05.451777   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:05.451807   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:05.498508   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:05.498543   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:05.568808   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:05.568843   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:05.616879   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:05.616909   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:08.220414   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:08.231126   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:08.231199   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:08.258035   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:08.258105   59960 cri.go:89] found id: ""
	I1126 20:12:08.258125   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:08.258192   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.262176   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:08.262249   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:08.289710   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:08.289733   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:08.289739   59960 cri.go:89] found id: ""
	I1126 20:12:08.289750   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:08.289805   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.293485   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.297802   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:08.297880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:08.327209   59960 cri.go:89] found id: ""
	I1126 20:12:08.327234   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.327243   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:08.327263   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:08.327336   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:08.357819   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:08.357840   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:08.357845   59960 cri.go:89] found id: ""
	I1126 20:12:08.357852   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:08.357906   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.361705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.365237   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:08.365328   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:08.394319   59960 cri.go:89] found id: ""
	I1126 20:12:08.394383   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.394399   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:08.394406   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:08.394480   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:08.420463   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:08.420527   59960 cri.go:89] found id: ""
	I1126 20:12:08.420553   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:08.420638   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.424335   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:08.424450   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:08.452961   59960 cri.go:89] found id: ""
	I1126 20:12:08.452986   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.452995   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:08.453003   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:08.453014   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:08.493988   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:08.494022   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:08.544465   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:08.544499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:08.574385   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:08.574413   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:08.586334   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:08.586371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:08.667454   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:08.650997    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.659303    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.660307    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662037    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662374    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:08.667486   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:08.667499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:08.699349   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:08.699378   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:08.764949   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:08.764985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:08.796757   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:08.796785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:08.880624   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:08.880660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:08.914640   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:08.914667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:11.513808   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:11.524482   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:11.524580   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:11.558859   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:11.558902   59960 cri.go:89] found id: ""
	I1126 20:12:11.558911   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:11.558970   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.562673   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:11.562747   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:11.588932   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:11.588951   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:11.588956   59960 cri.go:89] found id: ""
	I1126 20:12:11.588963   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:11.589017   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.592810   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.596570   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:11.596643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:11.623065   59960 cri.go:89] found id: ""
	I1126 20:12:11.623145   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.623161   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:11.623169   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:11.623229   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:11.650581   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:11.650605   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:11.650610   59960 cri.go:89] found id: ""
	I1126 20:12:11.650618   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:11.650671   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.655559   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.659747   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:11.659817   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:11.687296   59960 cri.go:89] found id: ""
	I1126 20:12:11.687322   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.687331   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:11.687337   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:11.687396   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:11.720511   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:11.720579   59960 cri.go:89] found id: ""
	I1126 20:12:11.720617   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:11.720708   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.724437   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:11.724506   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:11.749548   59960 cri.go:89] found id: ""
	I1126 20:12:11.749582   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.749591   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:11.749601   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:11.749612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:11.844417   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:11.844451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:11.856841   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:11.856870   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:11.927039   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:11.919031    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.919434    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921013    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921770    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.923409    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:11.927072   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:11.927085   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:11.952749   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:11.952778   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:11.979828   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:11.979854   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:12.054969   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:12.055007   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:12.096829   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:12.096861   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:12.139040   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:12.139073   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:12.188630   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:12.188665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:12.261491   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:12.261525   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:14.793314   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:14.805690   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:14.805792   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:14.834480   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:14.834550   59960 cri.go:89] found id: ""
	I1126 20:12:14.834563   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:14.834624   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.838451   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:14.838546   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:14.865258   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:14.865280   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:14.865288   59960 cri.go:89] found id: ""
	I1126 20:12:14.865296   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:14.865369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.869042   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.872598   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:14.872673   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:14.899453   59960 cri.go:89] found id: ""
	I1126 20:12:14.899475   59960 logs.go:282] 0 containers: []
	W1126 20:12:14.899484   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:14.899491   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:14.899553   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:14.927802   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:14.927830   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:14.927837   59960 cri.go:89] found id: ""
	I1126 20:12:14.927845   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:14.927940   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.932558   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.936133   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:14.936204   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:14.961102   59960 cri.go:89] found id: ""
	I1126 20:12:14.961173   59960 logs.go:282] 0 containers: []
	W1126 20:12:14.961195   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:14.961215   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:14.961302   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:15.002363   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:15.002384   59960 cri.go:89] found id: ""
	I1126 20:12:15.002393   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:15.002447   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:15.006142   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:15.006212   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:15.032134   59960 cri.go:89] found id: ""
	I1126 20:12:15.032199   59960 logs.go:282] 0 containers: []
	W1126 20:12:15.032214   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:15.032224   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:15.032240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:15.081347   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:15.081379   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:15.180623   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:15.180658   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:15.209901   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:15.209962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:15.262607   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:15.262636   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:15.288510   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:15.288544   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:15.367680   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:15.367714   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:15.412204   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:15.412231   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:15.424270   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:15.424300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:15.503073   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:15.494667    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.495283    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.496993    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.497515    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.498972    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:15.503139   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:15.503167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:15.550262   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:15.550296   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.118444   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:18.129864   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:18.129981   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:18.156819   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:18.156838   59960 cri.go:89] found id: ""
	I1126 20:12:18.156846   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:18.156904   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.161071   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:18.161149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:18.189616   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:18.189639   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:18.189644   59960 cri.go:89] found id: ""
	I1126 20:12:18.189651   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:18.189705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.193599   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.197622   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:18.197702   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:18.229000   59960 cri.go:89] found id: ""
	I1126 20:12:18.229024   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.229034   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:18.229041   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:18.229097   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:18.258704   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.258728   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:18.258734   59960 cri.go:89] found id: ""
	I1126 20:12:18.258741   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:18.258799   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.262617   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.266630   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:18.266703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:18.294498   59960 cri.go:89] found id: ""
	I1126 20:12:18.294520   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.294528   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:18.294535   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:18.294592   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:18.321461   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:18.321534   59960 cri.go:89] found id: ""
	I1126 20:12:18.321556   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:18.321645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.325350   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:18.325460   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:18.351492   59960 cri.go:89] found id: ""
	I1126 20:12:18.351553   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.351579   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:18.351599   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:18.351637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:18.407171   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:18.407205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:18.439080   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:18.439112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:18.547958   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:18.547995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:18.619721   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:18.609846    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.610654    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612119    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612768    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.614366    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:18.619742   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:18.619754   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:18.645098   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:18.645177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:18.682606   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:18.682639   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.763422   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:18.763453   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:18.795735   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:18.795762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:18.822004   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:18.822035   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:18.896691   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:18.896727   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:21.410083   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:21.420840   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:21.420938   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:21.446994   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:21.447016   59960 cri.go:89] found id: ""
	I1126 20:12:21.447024   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:21.447102   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.450650   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:21.450721   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:21.479530   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:21.479554   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:21.479559   59960 cri.go:89] found id: ""
	I1126 20:12:21.479566   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:21.479639   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.483856   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.487301   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:21.487396   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:21.514632   59960 cri.go:89] found id: ""
	I1126 20:12:21.514655   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.514664   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:21.514677   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:21.514734   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:21.552676   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:21.552697   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:21.552701   59960 cri.go:89] found id: ""
	I1126 20:12:21.552708   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:21.552764   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.558562   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.562503   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:21.562570   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:21.592027   59960 cri.go:89] found id: ""
	I1126 20:12:21.592051   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.592059   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:21.592065   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:21.592122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:21.622050   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:21.622069   59960 cri.go:89] found id: ""
	I1126 20:12:21.622078   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:21.622133   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.625979   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:21.626057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:21.659506   59960 cri.go:89] found id: ""
	I1126 20:12:21.659530   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.659539   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:21.659548   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:21.659561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:21.692379   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:21.692406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:21.765021   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:21.765055   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:21.839116   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:21.830975    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.831759    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833349    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833904    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.835476    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:21.839140   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:21.839153   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:21.865386   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:21.865413   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:21.904223   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:21.904257   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:21.949513   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:21.949545   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:21.975811   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:21.975838   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:22.009804   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:22.009830   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:22.114067   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:22.114107   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:22.129823   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:22.129850   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:24.699777   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:24.710717   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:24.710835   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:24.737361   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:24.737395   59960 cri.go:89] found id: ""
	I1126 20:12:24.737404   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:24.737467   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.741100   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:24.741181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:24.766942   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:24.767005   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:24.767023   59960 cri.go:89] found id: ""
	I1126 20:12:24.767038   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:24.767117   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.771423   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.775599   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:24.775679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:24.807211   59960 cri.go:89] found id: ""
	I1126 20:12:24.807238   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.807247   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:24.807254   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:24.807313   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:24.839448   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:24.839474   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:24.839480   59960 cri.go:89] found id: ""
	I1126 20:12:24.839487   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:24.839543   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.843345   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.846785   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:24.846859   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:24.875974   59960 cri.go:89] found id: ""
	I1126 20:12:24.875999   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.876008   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:24.876015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:24.876074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:24.904623   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:24.904646   59960 cri.go:89] found id: ""
	I1126 20:12:24.904655   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:24.904729   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.908536   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:24.908626   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:24.937367   59960 cri.go:89] found id: ""
	I1126 20:12:24.937448   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.937471   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:24.937494   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:24.937534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:24.976827   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:24.976864   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:25.024594   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:25.024629   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:25.103663   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:25.103701   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:25.184899   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:25.184934   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:25.288663   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:25.288696   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:25.303312   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:25.303340   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:25.371319   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:25.361818    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.362509    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.364256    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.365013    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.366870    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:25.371342   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:25.371357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:25.399886   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:25.399954   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:25.431130   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:25.431162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:25.457679   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:25.457758   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:27.990400   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:28.001290   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:28.001359   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:28.027402   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:28.027424   59960 cri.go:89] found id: ""
	I1126 20:12:28.027441   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:28.027501   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.030992   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:28.031083   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:28.072993   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:28.073014   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:28.073019   59960 cri.go:89] found id: ""
	I1126 20:12:28.073026   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:28.073084   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.076846   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.080628   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:28.080762   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:28.107876   59960 cri.go:89] found id: ""
	I1126 20:12:28.107902   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.107911   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:28.107918   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:28.107993   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:28.135277   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:28.135299   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:28.135305   59960 cri.go:89] found id: ""
	I1126 20:12:28.135312   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:28.135369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.139340   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.143115   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:28.143193   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:28.179129   59960 cri.go:89] found id: ""
	I1126 20:12:28.179230   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.179259   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:28.179273   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:28.179346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:28.208432   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:28.208453   59960 cri.go:89] found id: ""
	I1126 20:12:28.208465   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:28.208523   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.212104   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:28.212174   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:28.239214   59960 cri.go:89] found id: ""
	I1126 20:12:28.239290   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.239307   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:28.239317   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:28.239331   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:28.311306   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:28.311342   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:28.340943   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:28.340972   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:28.376088   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:28.376113   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:28.447578   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:28.440425    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.440837    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442342    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442644    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.444078    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:28.440425    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.440837    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442342    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442644    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.444078    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:28.447601   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:28.447613   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:28.494672   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:28.494707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:28.524817   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:28.524847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:28.611534   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:28.611568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:28.717586   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:28.717621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:28.729869   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:28.729894   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:28.755777   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:28.755805   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.304943   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:31.316121   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:31.316189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:31.344914   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:31.344936   59960 cri.go:89] found id: ""
	I1126 20:12:31.344945   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:31.345000   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.348636   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:31.348708   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:31.376592   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.376614   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:31.376623   59960 cri.go:89] found id: ""
	I1126 20:12:31.376630   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:31.376683   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.380757   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.384468   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:31.384545   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:31.415544   59960 cri.go:89] found id: ""
	I1126 20:12:31.415570   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.415579   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:31.415586   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:31.415646   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:31.441604   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:31.441680   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:31.441699   59960 cri.go:89] found id: ""
	I1126 20:12:31.441723   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:31.441808   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.445590   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.449159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:31.449233   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:31.475467   59960 cri.go:89] found id: ""
	I1126 20:12:31.475492   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.475501   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:31.475507   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:31.475567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:31.505974   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:31.505995   59960 cri.go:89] found id: ""
	I1126 20:12:31.506004   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:31.506068   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.510913   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:31.510988   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:31.555870   59960 cri.go:89] found id: ""
	I1126 20:12:31.555901   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.555911   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:31.555920   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:31.555932   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:31.569317   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:31.569396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:31.639071   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:31.630335    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.631132    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.632992    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.633425    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.635012    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:31.630335    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.631132    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.632992    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.633425    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.635012    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:31.639141   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:31.639171   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:31.685122   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:31.685156   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:31.715735   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:31.715763   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:31.744469   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:31.744499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.782788   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:31.782822   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:31.854784   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:31.854820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:31.883960   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:31.883989   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:31.968197   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:31.968235   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:32.000618   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:32.000646   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:34.599812   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:34.610580   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:34.610690   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:34.643812   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:34.643835   59960 cri.go:89] found id: ""
	I1126 20:12:34.643844   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:34.643902   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.647819   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:34.647891   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:34.681825   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:34.681849   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:34.681855   59960 cri.go:89] found id: ""
	I1126 20:12:34.681863   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:34.681959   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.685589   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.689208   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:34.689280   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:34.719704   59960 cri.go:89] found id: ""
	I1126 20:12:34.719727   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.719736   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:34.719743   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:34.719802   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:34.745609   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:34.745632   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:34.745639   59960 cri.go:89] found id: ""
	I1126 20:12:34.745646   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:34.745704   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.749369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.752915   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:34.752982   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:34.778956   59960 cri.go:89] found id: ""
	I1126 20:12:34.778982   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.778996   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:34.779003   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:34.779059   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:34.805123   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:34.805146   59960 cri.go:89] found id: ""
	I1126 20:12:34.805153   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:34.805211   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.808760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:34.808834   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:34.834427   59960 cri.go:89] found id: ""
	I1126 20:12:34.834452   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.834462   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:34.834471   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:34.834482   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:34.912760   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:34.912792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:35.015751   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:35.015790   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:35.046216   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:35.046291   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:35.092725   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:35.092760   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:35.163096   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:35.163130   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:35.191405   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:35.191488   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:35.227181   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:35.227213   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:35.240889   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:35.240922   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:35.311849   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:35.302602    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.303934    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.304899    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.306705    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.307280    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:35.302602    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.303934    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.304899    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.306705    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.307280    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:35.311871   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:35.311884   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:35.356916   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:35.356951   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:37.883250   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:37.894052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:37.894122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:37.924918   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:37.924943   59960 cri.go:89] found id: ""
	I1126 20:12:37.924956   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:37.925020   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.928865   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:37.928940   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:37.961907   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:37.961958   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:37.961964   59960 cri.go:89] found id: ""
	I1126 20:12:37.961971   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:37.962035   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.965843   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.969339   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:37.969409   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:37.995343   59960 cri.go:89] found id: ""
	I1126 20:12:37.995373   59960 logs.go:282] 0 containers: []
	W1126 20:12:37.995381   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:37.995388   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:37.995491   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:38.022312   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:38.022334   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:38.022339   59960 cri.go:89] found id: ""
	I1126 20:12:38.022346   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:38.022413   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.026080   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.029533   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:38.029622   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:38.060280   59960 cri.go:89] found id: ""
	I1126 20:12:38.060307   59960 logs.go:282] 0 containers: []
	W1126 20:12:38.060346   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:38.060368   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:38.060437   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:38.091248   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:38.091312   59960 cri.go:89] found id: ""
	I1126 20:12:38.091327   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:38.091425   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.095836   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:38.095914   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:38.125378   59960 cri.go:89] found id: ""
	I1126 20:12:38.125403   59960 logs.go:282] 0 containers: []
	W1126 20:12:38.125413   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:38.125422   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:38.125436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:38.151847   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:38.151875   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:38.202356   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:38.202391   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:38.247650   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:38.247725   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:38.275709   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:38.275736   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:38.307514   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:38.307542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:38.404957   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:38.404994   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:38.491924   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:38.491962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:38.521423   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:38.521460   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:38.598021   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:38.598053   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:38.610973   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:38.611004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:38.687841   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:38.679705   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.680686   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.681793   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.682498   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.684162   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:41.188401   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:41.199011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:41.199080   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:41.227170   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:41.227196   59960 cri.go:89] found id: ""
	I1126 20:12:41.227205   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:41.227260   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.230873   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:41.230945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:41.257484   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:41.257506   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:41.257522   59960 cri.go:89] found id: ""
	I1126 20:12:41.257529   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:41.257584   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.261286   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.265036   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:41.265101   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:41.290579   59960 cri.go:89] found id: ""
	I1126 20:12:41.290645   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.290669   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:41.290682   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:41.290741   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:41.319766   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:41.319786   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:41.319791   59960 cri.go:89] found id: ""
	I1126 20:12:41.319799   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:41.319859   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.323637   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.327077   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:41.327177   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:41.356676   59960 cri.go:89] found id: ""
	I1126 20:12:41.356702   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.356711   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:41.356719   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:41.356783   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:41.385771   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:41.385790   59960 cri.go:89] found id: ""
	I1126 20:12:41.385798   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:41.385852   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.389446   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:41.389544   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:41.416642   59960 cri.go:89] found id: ""
	I1126 20:12:41.416710   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.416732   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:41.416754   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:41.416788   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:41.482246   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:41.473419   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.474136   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.475824   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.476403   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.478152   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:41.482311   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:41.482339   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:41.509950   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:41.510016   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:41.557291   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:41.557324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:41.584211   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:41.584240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:41.666177   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:41.666212   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:41.767334   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:41.767369   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:41.781064   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:41.781089   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:41.825285   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:41.825321   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:41.892538   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:41.892573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:41.920754   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:41.920785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:44.468280   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:44.479465   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:44.479546   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:44.507592   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:44.507615   59960 cri.go:89] found id: ""
	I1126 20:12:44.507623   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:44.507679   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.511422   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:44.511510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:44.543146   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:44.543169   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:44.543174   59960 cri.go:89] found id: ""
	I1126 20:12:44.543181   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:44.543251   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.547022   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.550639   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:44.550719   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:44.579025   59960 cri.go:89] found id: ""
	I1126 20:12:44.579054   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.579063   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:44.579070   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:44.579139   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:44.611309   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:44.611332   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:44.611336   59960 cri.go:89] found id: ""
	I1126 20:12:44.611344   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:44.611407   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.615332   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.619108   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:44.619183   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:44.645161   59960 cri.go:89] found id: ""
	I1126 20:12:44.645185   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.645194   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:44.645201   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:44.645257   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:44.684280   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:44.684301   59960 cri.go:89] found id: ""
	I1126 20:12:44.684310   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:44.684364   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.687985   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:44.688057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:44.713170   59960 cri.go:89] found id: ""
	I1126 20:12:44.713193   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.713202   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:44.713211   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:44.713225   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:44.790764   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:44.782647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.783505   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785179   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785579   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.787022   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:44.790787   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:44.790801   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:44.841911   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:44.842082   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:44.886124   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:44.886155   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:44.956783   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:44.956817   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:44.992805   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:44.992834   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:45.021163   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:45.021190   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:45.060873   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:45.061452   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:45.201027   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:45.201119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:45.266419   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:45.266547   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:45.415986   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:45.416024   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:47.928674   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:47.940771   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:47.940843   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:47.966175   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:47.966194   59960 cri.go:89] found id: ""
	I1126 20:12:47.966202   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:47.966254   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:47.969908   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:47.970011   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:47.997001   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:47.997027   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:47.997032   59960 cri.go:89] found id: ""
	I1126 20:12:47.997040   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:47.997096   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.001757   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.005881   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:48.005980   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:48.031565   59960 cri.go:89] found id: ""
	I1126 20:12:48.031587   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.031595   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:48.031602   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:48.031660   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:48.063357   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:48.063380   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:48.063386   59960 cri.go:89] found id: ""
	I1126 20:12:48.063393   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:48.063450   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.068044   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.073135   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:48.073260   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:48.103364   59960 cri.go:89] found id: ""
	I1126 20:12:48.103391   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.103401   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:48.103408   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:48.103511   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:48.134700   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:48.134720   59960 cri.go:89] found id: ""
	I1126 20:12:48.134728   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:48.134795   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.138489   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:48.138568   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:48.164615   59960 cri.go:89] found id: ""
	I1126 20:12:48.164639   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.164648   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:48.164657   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:48.164670   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:48.238206   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:48.238245   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:48.270325   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:48.270352   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:48.316632   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:48.316660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:48.328526   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:48.328554   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:48.370051   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:48.370081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:48.397236   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:48.397264   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:48.478994   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:48.479029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:48.586134   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:48.586167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:48.661172   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:48.650880   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.652436   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.653061   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.654717   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.655290   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:48.650880   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.652436   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.653061   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.654717   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.655290   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:48.661195   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:48.661211   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:48.689769   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:48.689797   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.235721   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:51.246961   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:51.247038   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:51.276386   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:51.276410   59960 cri.go:89] found id: ""
	I1126 20:12:51.276419   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:51.276472   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.280282   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:51.280363   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:51.307844   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:51.307875   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.307880   59960 cri.go:89] found id: ""
	I1126 20:12:51.307888   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:51.307944   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.311885   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.315516   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:51.315643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:51.343040   59960 cri.go:89] found id: ""
	I1126 20:12:51.343068   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.343077   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:51.343084   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:51.343144   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:51.371879   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:51.371901   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:51.371907   59960 cri.go:89] found id: ""
	I1126 20:12:51.371920   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:51.371976   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.375815   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.379444   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:51.379518   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:51.409590   59960 cri.go:89] found id: ""
	I1126 20:12:51.409615   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.409624   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:51.409630   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:51.409688   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:51.440665   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:51.440692   59960 cri.go:89] found id: ""
	I1126 20:12:51.440701   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:51.440756   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.444486   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:51.444565   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:51.470661   59960 cri.go:89] found id: ""
	I1126 20:12:51.470686   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.470695   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:51.470705   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:51.470749   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:51.482794   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:51.482823   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:51.570460   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:51.561457   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.562296   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.563970   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.564288   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.566409   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:51.561457   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.562296   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.563970   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.564288   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.566409   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:51.570484   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:51.570498   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:51.596696   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:51.596724   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:51.657780   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:51.657820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:51.736300   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:51.736338   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:51.772635   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:51.772664   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:51.808014   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:51.808042   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:51.909775   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:51.909814   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.955849   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:51.955887   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:51.986011   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:51.986040   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:54.569991   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:54.582000   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:54.582074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:54.610486   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:54.610506   59960 cri.go:89] found id: ""
	I1126 20:12:54.610515   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:54.610573   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.614711   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:54.614787   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:54.641548   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:54.641571   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:54.641577   59960 cri.go:89] found id: ""
	I1126 20:12:54.641584   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:54.641645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.645430   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.649375   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:54.649465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:54.677350   59960 cri.go:89] found id: ""
	I1126 20:12:54.677377   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.677386   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:54.677399   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:54.677456   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:54.706226   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:54.706249   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:54.706254   59960 cri.go:89] found id: ""
	I1126 20:12:54.706261   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:54.706315   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.710188   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.713666   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:54.713759   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:54.745132   59960 cri.go:89] found id: ""
	I1126 20:12:54.745158   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.745167   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:54.745174   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:54.745235   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:54.774016   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:54.774039   59960 cri.go:89] found id: ""
	I1126 20:12:54.774047   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:54.774105   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.778220   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:54.778293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:54.807768   59960 cri.go:89] found id: ""
	I1126 20:12:54.807831   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.807845   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:54.807855   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:54.807867   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:54.904620   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:54.904657   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:54.931520   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:54.931548   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:54.974322   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:54.974360   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:55.010146   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:55.010176   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:55.044963   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:55.045006   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:55.060490   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:55.060520   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:55.132694   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:55.124286   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.124937   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.126610   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.127207   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.128929   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:55.124286   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.124937   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.126610   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.127207   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.128929   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:55.132729   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:55.132746   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:55.180103   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:55.180139   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:55.258117   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:55.258154   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:55.289687   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:55.289716   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:57.870076   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:57.881883   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:57.881978   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:57.911809   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:57.911833   59960 cri.go:89] found id: ""
	I1126 20:12:57.911841   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:57.911899   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.915590   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:57.915685   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:57.943647   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:57.943671   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:57.943677   59960 cri.go:89] found id: ""
	I1126 20:12:57.943684   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:57.943747   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.947699   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.951409   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:57.951489   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:57.979114   59960 cri.go:89] found id: ""
	I1126 20:12:57.979138   59960 logs.go:282] 0 containers: []
	W1126 20:12:57.979147   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:57.979154   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:57.979214   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:58.009760   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:58.009781   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:58.009787   59960 cri.go:89] found id: ""
	I1126 20:12:58.009794   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:58.009855   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.013598   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.017135   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:58.017207   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:58.047222   59960 cri.go:89] found id: ""
	I1126 20:12:58.047247   59960 logs.go:282] 0 containers: []
	W1126 20:12:58.047255   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:58.047262   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:58.047324   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:58.094431   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:58.094510   59960 cri.go:89] found id: ""
	I1126 20:12:58.094524   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:58.094586   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.099004   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:58.099099   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:58.126698   59960 cri.go:89] found id: ""
	I1126 20:12:58.126727   59960 logs.go:282] 0 containers: []
	W1126 20:12:58.126735   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:58.126744   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:58.126756   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:58.155602   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:58.155629   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:58.196131   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:58.196166   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:58.243760   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:58.243793   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:58.314546   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:58.314583   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:58.347422   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:58.347451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:58.373247   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:58.373277   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:58.448488   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:58.448524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:58.480586   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:58.480615   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:58.586743   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:58.586799   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:58.600003   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:58.600029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:58.682648   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:58.673481   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.674315   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.675021   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.676838   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.677737   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:01.183502   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:01.195046   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:01.195153   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:01.224257   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:01.224281   59960 cri.go:89] found id: ""
	I1126 20:13:01.224289   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:01.224365   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.228134   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:01.228206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:01.265990   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:01.266014   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:01.266019   59960 cri.go:89] found id: ""
	I1126 20:13:01.266027   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:01.266084   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.270682   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.274505   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:01.274580   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:01.302962   59960 cri.go:89] found id: ""
	I1126 20:13:01.302989   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.302998   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:01.303005   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:01.303072   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:01.335599   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:01.335621   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:01.335627   59960 cri.go:89] found id: ""
	I1126 20:13:01.335635   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:01.335689   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.339621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.343531   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:01.343614   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:01.369553   59960 cri.go:89] found id: ""
	I1126 20:13:01.369578   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.369588   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:01.369594   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:01.369657   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:01.402170   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:01.402197   59960 cri.go:89] found id: ""
	I1126 20:13:01.402205   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:01.402266   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.406260   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:01.406336   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:01.432250   59960 cri.go:89] found id: ""
	I1126 20:13:01.432326   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.432352   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:01.432362   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:01.432378   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:01.473457   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:01.473491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:01.525391   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:01.525445   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:01.557734   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:01.557765   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:01.650427   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:01.650465   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:01.696040   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:01.696070   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:01.801258   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:01.801297   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:01.872498   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:01.872534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:01.912672   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:01.912725   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:01.927976   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:01.928008   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:02.002577   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:01.992139   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.993221   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.994589   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996153   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996915   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:02.002601   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:02.002614   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:04.532051   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:04.544501   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:04.544572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:04.571414   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:04.571435   59960 cri.go:89] found id: ""
	I1126 20:13:04.571443   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:04.571494   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.575072   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:04.575149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:04.603292   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:04.603312   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:04.603316   59960 cri.go:89] found id: ""
	I1126 20:13:04.603326   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:04.603378   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.607479   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.610889   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:04.610970   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:04.636626   59960 cri.go:89] found id: ""
	I1126 20:13:04.636652   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.636662   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:04.636668   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:04.636745   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:04.665487   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:04.665511   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:04.665516   59960 cri.go:89] found id: ""
	I1126 20:13:04.665523   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:04.665599   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.669516   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.673155   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:04.673221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:04.705848   59960 cri.go:89] found id: ""
	I1126 20:13:04.705873   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.705882   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:04.705888   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:04.705971   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:04.741254   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:04.741277   59960 cri.go:89] found id: ""
	I1126 20:13:04.741285   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:04.741340   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.745396   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:04.745469   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:04.777680   59960 cri.go:89] found id: ""
	I1126 20:13:04.777713   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.777723   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:04.777732   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:04.777744   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:04.884972   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:04.885008   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:04.898040   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:04.898066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:04.971530   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:04.971610   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:05.003493   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:05.003573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:05.082481   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:05.082515   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:05.116089   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:05.116119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:05.186979   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:05.178888   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.179664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181297   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.183205   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:05.187006   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:05.187020   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:05.214669   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:05.214698   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:05.261207   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:05.261238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:05.306449   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:05.306482   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:07.838042   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:07.850498   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:07.850567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:07.878108   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:07.878130   59960 cri.go:89] found id: ""
	I1126 20:13:07.878138   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:07.878197   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.882580   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:07.882654   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:07.911855   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:07.911886   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:07.911891   59960 cri.go:89] found id: ""
	I1126 20:13:07.911899   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:07.911960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.915705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.919300   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:07.919371   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:07.951018   59960 cri.go:89] found id: ""
	I1126 20:13:07.951044   59960 logs.go:282] 0 containers: []
	W1126 20:13:07.951053   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:07.951059   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:07.951119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:07.978929   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:07.978951   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:07.978956   59960 cri.go:89] found id: ""
	I1126 20:13:07.978963   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:07.979017   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.983189   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.986830   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:07.986903   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:08.016199   59960 cri.go:89] found id: ""
	I1126 20:13:08.016231   59960 logs.go:282] 0 containers: []
	W1126 20:13:08.016240   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:08.016251   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:08.016325   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:08.053456   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:08.053528   59960 cri.go:89] found id: ""
	I1126 20:13:08.053549   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:08.053644   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:08.057986   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:08.058066   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:08.087479   59960 cri.go:89] found id: ""
	I1126 20:13:08.087508   59960 logs.go:282] 0 containers: []
	W1126 20:13:08.087517   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:08.087533   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:08.087546   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:08.132468   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:08.132502   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:08.176740   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:08.176778   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:08.250131   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:08.250178   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:08.280307   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:08.280337   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:08.310477   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:08.310506   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:08.413610   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:08.413648   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:08.484512   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:08.474848   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.476074   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.477530   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.478182   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.479748   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:08.484538   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:08.484551   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:08.561138   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:08.561172   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:08.596362   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:08.596439   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:08.609838   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:08.609909   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.136633   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:11.147922   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:11.148007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:11.179880   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.179915   59960 cri.go:89] found id: ""
	I1126 20:13:11.179923   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:11.180040   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.184887   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:11.184958   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:11.213848   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:11.213872   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:11.213878   59960 cri.go:89] found id: ""
	I1126 20:13:11.213885   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:11.213981   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.217804   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.221572   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:11.221649   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:11.258706   59960 cri.go:89] found id: ""
	I1126 20:13:11.258783   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.258799   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:11.258806   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:11.258880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:11.289663   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:11.289686   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:11.289692   59960 cri.go:89] found id: ""
	I1126 20:13:11.289699   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:11.289755   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.293522   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.298425   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:11.298504   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:11.325442   59960 cri.go:89] found id: ""
	I1126 20:13:11.325508   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.325534   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:11.325552   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:11.325636   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:11.352745   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:11.352808   59960 cri.go:89] found id: ""
	I1126 20:13:11.352834   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:11.352923   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.356710   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:11.356824   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:11.384378   59960 cri.go:89] found id: ""
	I1126 20:13:11.384402   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.384412   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:11.384421   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:11.384433   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:11.396869   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:11.396938   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:11.467278   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:11.459180   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.459948   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.461472   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.462000   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.463589   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:11.459180   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.459948   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.461472   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.462000   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.463589   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:11.467302   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:11.467316   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.494598   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:11.494626   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:11.533337   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:11.533372   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:11.559364   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:11.559392   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:11.642834   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:11.642873   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:11.680367   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:11.680393   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:11.784039   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:11.784075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:11.834225   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:11.834260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:11.905094   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:11.905129   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.439226   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:14.451155   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:14.451245   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:14.493752   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:14.493776   59960 cri.go:89] found id: ""
	I1126 20:13:14.493784   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:14.493840   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.497504   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:14.497627   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:14.524624   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:14.524646   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:14.524652   59960 cri.go:89] found id: ""
	I1126 20:13:14.524659   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:14.524743   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.528418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.532417   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:14.532512   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:14.559402   59960 cri.go:89] found id: ""
	I1126 20:13:14.559477   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.559491   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:14.559498   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:14.559556   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:14.588825   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:14.588848   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.588853   59960 cri.go:89] found id: ""
	I1126 20:13:14.588860   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:14.588921   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.593022   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.596763   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:14.596831   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:14.624835   59960 cri.go:89] found id: ""
	I1126 20:13:14.624858   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.624867   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:14.624874   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:14.624929   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:14.650771   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:14.650846   59960 cri.go:89] found id: ""
	I1126 20:13:14.650872   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:14.650960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.656095   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:14.656219   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:14.682420   59960 cri.go:89] found id: ""
	I1126 20:13:14.682493   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.682517   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:14.682540   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:14.682581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:14.722936   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:14.722971   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.754105   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:14.754134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:14.786128   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:14.786156   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:14.798341   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:14.798370   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:14.873270   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:14.865757   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.866349   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.867866   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.868348   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.869793   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:14.865757   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.866349   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.867866   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.868348   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.869793   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:14.873292   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:14.873306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:14.920206   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:14.920240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:14.996591   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:14.996624   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:15.024423   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:15.024451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:15.105848   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:15.105881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:15.205091   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:15.205170   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:17.734682   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:17.745326   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:17.745391   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:17.773503   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:17.773525   59960 cri.go:89] found id: ""
	I1126 20:13:17.773534   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:17.773621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.777326   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:17.777400   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:17.805117   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:17.805139   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:17.805144   59960 cri.go:89] found id: ""
	I1126 20:13:17.805151   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:17.805206   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.809065   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.812530   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:17.812601   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:17.841430   59960 cri.go:89] found id: ""
	I1126 20:13:17.841456   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.841465   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:17.841472   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:17.841530   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:17.868985   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:17.869009   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:17.869014   59960 cri.go:89] found id: ""
	I1126 20:13:17.869024   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:17.869081   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.882183   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.885701   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:17.885794   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:17.918849   59960 cri.go:89] found id: ""
	I1126 20:13:17.918872   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.918880   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:17.918887   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:17.918947   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:17.949773   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:17.949849   59960 cri.go:89] found id: ""
	I1126 20:13:17.949872   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:17.949996   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.953636   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:17.953705   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:17.980243   59960 cri.go:89] found id: ""
	I1126 20:13:17.980266   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.980275   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:17.980284   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:17.980295   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:18.011301   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:18.011331   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:18.038493   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:18.038526   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:18.080613   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:18.080641   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:18.160950   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:18.160988   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:18.262170   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:18.262215   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:18.275569   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:18.275593   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:18.351781   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:18.343534   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.344057   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.345769   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.346381   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.347931   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:18.343534   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.344057   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.345769   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.346381   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.347931   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:18.351805   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:18.351817   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:18.389344   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:18.389375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:18.434916   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:18.434949   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:18.527668   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:18.527702   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.058771   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:21.073274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:21.073339   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:21.121326   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:21.121345   59960 cri.go:89] found id: ""
	I1126 20:13:21.121356   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:21.121415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.130434   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:21.130507   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:21.164100   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:21.164161   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:21.164191   59960 cri.go:89] found id: ""
	I1126 20:13:21.164212   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:21.164289   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.168566   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.173217   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:21.173328   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:21.201882   59960 cri.go:89] found id: ""
	I1126 20:13:21.202006   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.202036   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:21.202055   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:21.202157   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:21.230033   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:21.230099   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:21.230120   59960 cri.go:89] found id: ""
	I1126 20:13:21.230144   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:21.230222   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.234188   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.238625   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:21.238709   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:21.266450   59960 cri.go:89] found id: ""
	I1126 20:13:21.266476   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.266485   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:21.266492   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:21.266567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:21.293192   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.293221   59960 cri.go:89] found id: ""
	I1126 20:13:21.293229   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:21.293320   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.297074   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:21.297146   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:21.325608   59960 cri.go:89] found id: ""
	I1126 20:13:21.325635   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.325644   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:21.325653   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:21.325665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:21.365168   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:21.365201   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:21.407809   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:21.407841   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:21.490502   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:21.490538   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:21.593562   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:21.593598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:21.620251   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:21.620280   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:21.696224   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:21.696260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:21.724295   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:21.724324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.754121   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:21.754146   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:21.785320   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:21.785347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:21.797528   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:21.797556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:21.871066   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:21.862248   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.863127   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.864832   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.865449   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.867089   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:21.862248   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.863127   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.864832   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.865449   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.867089   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:24.371542   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:24.382011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:24.382074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:24.413323   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:24.413351   59960 cri.go:89] found id: ""
	I1126 20:13:24.413360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:24.413418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.417248   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:24.417327   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:24.443549   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:24.443571   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:24.443576   59960 cri.go:89] found id: ""
	I1126 20:13:24.443583   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:24.443638   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.447448   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.450865   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:24.450933   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:24.481019   59960 cri.go:89] found id: ""
	I1126 20:13:24.481043   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.481052   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:24.481059   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:24.481119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:24.509327   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:24.509349   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:24.509354   59960 cri.go:89] found id: ""
	I1126 20:13:24.509361   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:24.509416   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.512867   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.516116   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:24.516181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:24.546284   59960 cri.go:89] found id: ""
	I1126 20:13:24.546361   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.546390   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:24.546405   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:24.546464   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:24.571968   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:24.572032   59960 cri.go:89] found id: ""
	I1126 20:13:24.572047   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:24.572113   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.575760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:24.575830   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:24.603299   59960 cri.go:89] found id: ""
	I1126 20:13:24.603325   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.603334   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:24.603373   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:24.603390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:24.642562   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:24.642595   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:24.696607   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:24.696640   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:24.724494   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:24.724523   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:24.805443   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:24.805477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:24.880673   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:24.872137   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.872936   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.874737   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.875329   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.876994   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:24.872137   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.872936   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.874737   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.875329   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.876994   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:24.880694   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:24.880708   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:24.912019   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:24.912047   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:24.998475   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:24.998511   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:25.027058   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:25.027084   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:25.060548   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:25.060577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:25.167756   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:25.167795   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:27.682279   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:27.693116   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:27.693189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:27.720687   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:27.720706   59960 cri.go:89] found id: ""
	I1126 20:13:27.720713   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:27.720765   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.724317   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:27.724388   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:27.751345   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:27.751369   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:27.751375   59960 cri.go:89] found id: ""
	I1126 20:13:27.751384   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:27.751445   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.755313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.758668   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:27.758738   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:27.788496   59960 cri.go:89] found id: ""
	I1126 20:13:27.788567   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.788592   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:27.788611   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:27.788703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:27.815714   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:27.815743   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:27.815749   59960 cri.go:89] found id: ""
	I1126 20:13:27.815757   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:27.815831   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.819360   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.822959   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:27.823038   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:27.853270   59960 cri.go:89] found id: ""
	I1126 20:13:27.853316   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.853326   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:27.853333   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:27.853403   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:27.880677   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:27.880701   59960 cri.go:89] found id: ""
	I1126 20:13:27.880710   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:27.880766   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.884425   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:27.884499   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:27.917060   59960 cri.go:89] found id: ""
	I1126 20:13:27.917126   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.917150   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:27.917183   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:27.917213   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:27.929246   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:27.929321   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:28.005492   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:27.995998   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.996970   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.999116   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.000043   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.001867   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:27.995998   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.996970   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.999116   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.000043   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.001867   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:28.005554   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:28.005581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:28.032388   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:28.032414   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:28.090244   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:28.090279   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:28.140049   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:28.140081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:28.217015   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:28.217052   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:28.252634   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:28.252663   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:28.356298   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:28.356347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:28.391198   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:28.391227   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:28.470669   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:28.470706   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:31.018712   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:31.029520   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:31.029594   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:31.067229   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:31.067249   59960 cri.go:89] found id: ""
	I1126 20:13:31.067257   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:31.067315   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.071728   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:31.071796   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:31.100937   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:31.101015   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:31.101024   59960 cri.go:89] found id: ""
	I1126 20:13:31.101032   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:31.101092   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.106006   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.109883   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:31.110020   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:31.140073   59960 cri.go:89] found id: ""
	I1126 20:13:31.140098   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.140107   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:31.140114   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:31.140177   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:31.170126   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:31.170150   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:31.170155   59960 cri.go:89] found id: ""
	I1126 20:13:31.170163   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:31.170220   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.175522   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.180015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:31.180137   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:31.216744   59960 cri.go:89] found id: ""
	I1126 20:13:31.216771   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.216781   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:31.216787   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:31.216847   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:31.244620   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:31.244653   59960 cri.go:89] found id: ""
	I1126 20:13:31.244661   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:31.244727   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.248677   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:31.248770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:31.275812   59960 cri.go:89] found id: ""
	I1126 20:13:31.275890   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.275914   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:31.275936   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:31.275972   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:31.308954   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:31.308981   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:31.404058   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:31.404140   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:31.449144   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:31.449177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:31.526538   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:31.526575   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:31.613358   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:31.613393   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:31.626272   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:31.626300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:31.701051   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:31.692350   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.693035   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.694572   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.695120   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.696599   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:31.692350   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.693035   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.694572   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.695120   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.696599   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:31.701076   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:31.701089   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:31.726047   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:31.726075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:31.770205   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:31.770246   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:31.800872   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:31.800898   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.331337   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:34.343013   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:34.343079   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:34.369127   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:34.369186   59960 cri.go:89] found id: ""
	I1126 20:13:34.369220   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:34.369305   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.372919   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:34.372984   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:34.400785   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:34.400806   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:34.400811   59960 cri.go:89] found id: ""
	I1126 20:13:34.400818   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:34.400871   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.404967   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.408568   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:34.408648   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:34.434956   59960 cri.go:89] found id: ""
	I1126 20:13:34.434981   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.434990   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:34.434996   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:34.435051   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:34.472918   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:34.472943   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:34.472948   59960 cri.go:89] found id: ""
	I1126 20:13:34.472956   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:34.473009   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.476556   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.480021   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:34.480097   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:34.506491   59960 cri.go:89] found id: ""
	I1126 20:13:34.506513   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.506522   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:34.506528   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:34.506587   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:34.534595   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.534618   59960 cri.go:89] found id: ""
	I1126 20:13:34.534627   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:34.534681   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.542373   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:34.542487   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:34.569404   59960 cri.go:89] found id: ""
	I1126 20:13:34.569439   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.569449   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:34.569473   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:34.569491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:34.594901   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:34.594926   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:34.661252   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:34.661357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:34.736470   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:34.736504   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.767635   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:34.767659   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:34.849541   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:34.849578   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:34.890089   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:34.890122   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:34.918362   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:34.918390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:34.955774   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:34.955800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:35.056965   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:35.057001   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:35.078639   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:35.078668   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:35.151655   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:35.143337   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.143918   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.145438   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.146046   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.147630   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:35.143337   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.143918   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.145438   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.146046   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.147630   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:37.653306   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:37.665236   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:37.665306   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:37.692381   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:37.692404   59960 cri.go:89] found id: ""
	I1126 20:13:37.692420   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:37.692475   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.696411   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:37.696485   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:37.733416   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:37.733447   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:37.733452   59960 cri.go:89] found id: ""
	I1126 20:13:37.733459   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:37.733512   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.737487   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.740759   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:37.740827   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:37.770540   59960 cri.go:89] found id: ""
	I1126 20:13:37.770563   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.770571   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:37.770578   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:37.770645   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:37.798542   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:37.798566   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:37.798572   59960 cri.go:89] found id: ""
	I1126 20:13:37.798579   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:37.798632   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.802507   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.806007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:37.806128   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:37.831752   59960 cri.go:89] found id: ""
	I1126 20:13:37.831780   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.831789   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:37.831796   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:37.831911   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:37.859491   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:37.859516   59960 cri.go:89] found id: ""
	I1126 20:13:37.859526   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:37.859608   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.863305   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:37.863407   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:37.890262   59960 cri.go:89] found id: ""
	I1126 20:13:37.890324   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.890347   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:37.890370   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:37.890389   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:37.915303   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:37.915334   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:38.015981   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:38.016018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:38.028479   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:38.028518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:38.117235   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:38.107607   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.108494   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.110529   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.111224   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.112955   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:38.107607   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.108494   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.110529   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.111224   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.112955   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:38.117268   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:38.117293   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:38.146073   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:38.146106   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:38.223055   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:38.223091   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:38.256738   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:38.256769   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:38.284204   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:38.284234   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:38.322205   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:38.322237   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:38.365768   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:38.365800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:40.946037   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:40.957084   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:40.957219   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:40.988160   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:40.988223   59960 cri.go:89] found id: ""
	I1126 20:13:40.988247   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:40.988330   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:40.991862   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:40.991975   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:41.021645   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:41.021671   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:41.021676   59960 cri.go:89] found id: ""
	I1126 20:13:41.021683   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:41.021776   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.025458   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.028751   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:41.028818   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:41.055272   59960 cri.go:89] found id: ""
	I1126 20:13:41.055297   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.055306   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:41.055313   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:41.055373   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:41.083272   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:41.083293   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:41.083298   59960 cri.go:89] found id: ""
	I1126 20:13:41.083306   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:41.083361   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.089116   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.092770   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:41.092882   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:41.119939   59960 cri.go:89] found id: ""
	I1126 20:13:41.119969   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.119978   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:41.119985   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:41.120085   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:41.149635   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:41.149657   59960 cri.go:89] found id: ""
	I1126 20:13:41.149666   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:41.149719   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.153346   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:41.153420   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:41.180294   59960 cri.go:89] found id: ""
	I1126 20:13:41.180320   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.180329   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:41.180338   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:41.180350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:41.207608   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:41.207638   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:41.250184   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:41.250217   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:41.280787   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:41.280815   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:41.350595   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:41.339246   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.340025   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.341777   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.342622   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.345147   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:41.339246   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.340025   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.341777   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.342622   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.345147   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:41.350618   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:41.350631   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:41.395571   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:41.395607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:41.471537   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:41.471576   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:41.503158   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:41.503187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:41.581612   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:41.581647   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:41.616210   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:41.616238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:41.712278   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:41.712311   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:44.224835   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:44.235354   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:44.235427   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:44.262020   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:44.262040   59960 cri.go:89] found id: ""
	I1126 20:13:44.262047   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:44.262100   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.266500   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:44.266621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:44.293469   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:44.293492   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:44.293498   59960 cri.go:89] found id: ""
	I1126 20:13:44.293515   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:44.293592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.297513   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.301293   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:44.301379   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:44.331229   59960 cri.go:89] found id: ""
	I1126 20:13:44.331252   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.331260   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:44.331266   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:44.331326   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:44.358510   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:44.358529   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:44.358534   59960 cri.go:89] found id: ""
	I1126 20:13:44.358540   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:44.358597   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.362369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.365719   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:44.365788   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:44.401237   59960 cri.go:89] found id: ""
	I1126 20:13:44.401303   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.401326   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:44.401348   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:44.401437   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:44.428506   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:44.428524   59960 cri.go:89] found id: ""
	I1126 20:13:44.428537   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:44.428592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.432302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:44.432379   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:44.461193   59960 cri.go:89] found id: ""
	I1126 20:13:44.461216   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.461225   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:44.461234   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:44.461245   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:44.472842   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:44.472911   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:44.552602   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:44.536833   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.537581   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.546763   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.547452   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.548655   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:44.536833   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.537581   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.546763   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.547452   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.548655   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:44.552629   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:44.552642   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:44.579143   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:44.579171   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:44.608447   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:44.608472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:44.634421   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:44.634447   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:44.669334   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:44.669362   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:44.770710   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:44.770785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:44.815986   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:44.816016   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:44.860293   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:44.860327   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:44.936110   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:44.936144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:47.514839   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:47.528244   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:47.528398   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:47.557240   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:47.557263   59960 cri.go:89] found id: ""
	I1126 20:13:47.557271   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:47.557328   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.561044   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:47.561146   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:47.586866   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:47.586888   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:47.586894   59960 cri.go:89] found id: ""
	I1126 20:13:47.586901   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:47.586956   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.591194   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.594829   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:47.594905   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:47.621081   59960 cri.go:89] found id: ""
	I1126 20:13:47.621104   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.621113   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:47.621120   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:47.621182   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:47.649583   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:47.649605   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:47.649610   59960 cri.go:89] found id: ""
	I1126 20:13:47.649618   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:47.649673   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.655090   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.659029   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:47.659096   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:47.685101   59960 cri.go:89] found id: ""
	I1126 20:13:47.685125   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.685134   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:47.685141   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:47.685198   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:47.712581   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:47.712603   59960 cri.go:89] found id: ""
	I1126 20:13:47.712612   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:47.712673   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.716384   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:47.716461   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:47.746287   59960 cri.go:89] found id: ""
	I1126 20:13:47.746321   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.746330   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:47.746357   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:47.746375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:47.776577   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:47.776607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:47.810845   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:47.810874   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:47.851317   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:47.851350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:47.897021   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:47.897054   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:47.925761   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:47.925792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:47.953836   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:47.953863   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:48.054533   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:48.054569   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:48.074474   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:48.074505   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:48.148938   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:48.137331   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.137950   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.139682   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.140242   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.143726   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:48.137331   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.137950   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.139682   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.140242   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.143726   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:48.148963   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:48.148977   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:48.231199   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:48.231234   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:50.823233   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:50.833805   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:50.833878   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:50.862309   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:50.862333   59960 cri.go:89] found id: ""
	I1126 20:13:50.862342   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:50.862396   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.865957   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:50.866034   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:50.892542   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:50.892565   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:50.892571   59960 cri.go:89] found id: ""
	I1126 20:13:50.892578   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:50.892632   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.896328   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.899831   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:50.899905   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:50.931031   59960 cri.go:89] found id: ""
	I1126 20:13:50.931098   59960 logs.go:282] 0 containers: []
	W1126 20:13:50.931112   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:50.931119   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:50.931176   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:50.958547   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:50.958580   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:50.958586   59960 cri.go:89] found id: ""
	I1126 20:13:50.958594   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:50.958649   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.962711   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.966380   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:50.966453   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:50.998188   59960 cri.go:89] found id: ""
	I1126 20:13:50.998483   59960 logs.go:282] 0 containers: []
	W1126 20:13:50.998498   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:50.998505   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:50.998592   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:51.031422   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:51.031447   59960 cri.go:89] found id: ""
	I1126 20:13:51.031462   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:51.031519   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:51.035715   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:51.035788   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:51.077429   59960 cri.go:89] found id: ""
	I1126 20:13:51.077452   59960 logs.go:282] 0 containers: []
	W1126 20:13:51.077460   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:51.077469   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:51.077481   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:51.105578   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:51.105609   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:51.188473   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:51.188518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:51.220853   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:51.220886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:51.304811   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:51.304848   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:51.337094   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:51.337162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:51.434145   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:51.434183   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:51.474781   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:51.474815   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:51.523360   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:51.523390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:51.556210   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:51.556238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:51.568960   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:51.568989   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:51.646125   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:51.637986   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.638634   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640319   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640884   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.642607   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:54.147140   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:54.159570   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:54.159641   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:54.190129   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:54.190150   59960 cri.go:89] found id: ""
	I1126 20:13:54.190158   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:54.190221   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.193723   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:54.193795   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:54.221859   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:54.221881   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:54.221886   59960 cri.go:89] found id: ""
	I1126 20:13:54.221893   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:54.221986   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.225619   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.229615   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:54.229686   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:54.257427   59960 cri.go:89] found id: ""
	I1126 20:13:54.257454   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.257464   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:54.257470   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:54.257528   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:54.283499   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:54.283522   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:54.283528   59960 cri.go:89] found id: ""
	I1126 20:13:54.283535   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:54.283591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.287279   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.291072   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:54.291164   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:54.320377   59960 cri.go:89] found id: ""
	I1126 20:13:54.320409   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.320418   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:54.320424   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:54.320490   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:54.346357   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:54.346388   59960 cri.go:89] found id: ""
	I1126 20:13:54.346397   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:54.346453   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.350217   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:54.350337   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:54.387000   59960 cri.go:89] found id: ""
	I1126 20:13:54.387033   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.387042   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:54.387052   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:54.387064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:54.398981   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:54.399006   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:54.424733   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:54.424761   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:54.464124   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:54.464199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:54.516097   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:54.516149   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:54.597621   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:54.597656   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:54.626882   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:54.626916   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:54.706226   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:54.706262   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:54.777575   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:54.768229   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.769042   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.770705   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.771452   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.773075   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:54.777599   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:54.777612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:54.808526   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:54.808556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:54.839385   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:54.839412   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:57.435357   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:57.446250   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:57.446321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:57.476511   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:57.476531   59960 cri.go:89] found id: ""
	I1126 20:13:57.476539   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:57.476595   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.480521   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:57.480599   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:57.508216   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:57.508239   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:57.508244   59960 cri.go:89] found id: ""
	I1126 20:13:57.508251   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:57.508312   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.512264   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.515930   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:57.516007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:57.546712   59960 cri.go:89] found id: ""
	I1126 20:13:57.546737   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.546746   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:57.546753   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:57.546811   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:57.575286   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:57.575308   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:57.575314   59960 cri.go:89] found id: ""
	I1126 20:13:57.575321   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:57.575403   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.579177   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.582844   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:57.582947   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:57.610240   59960 cri.go:89] found id: ""
	I1126 20:13:57.610268   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.610276   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:57.610282   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:57.610366   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:57.637690   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:57.637715   59960 cri.go:89] found id: ""
	I1126 20:13:57.637722   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:57.637804   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.641691   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:57.641816   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:57.673478   59960 cri.go:89] found id: ""
	I1126 20:13:57.673512   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.673521   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:57.673546   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:57.673565   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:57.724644   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:57.724677   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:57.801587   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:57.801622   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:57.846990   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:57.847020   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:57.948301   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:57.948336   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:57.960477   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:57.960510   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:58.036195   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:58.028003   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.028530   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030166   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030875   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.032666   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:58.036262   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:58.036289   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:58.071247   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:58.071284   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:58.102552   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:58.102582   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:58.131358   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:58.131450   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:58.207844   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:58.207883   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:00.754664   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:00.765702   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:00.765771   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:00.806554   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:00.806579   59960 cri.go:89] found id: ""
	I1126 20:14:00.806587   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:00.806641   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.810501   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:00.810586   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:00.838112   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:00.838139   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:00.838144   59960 cri.go:89] found id: ""
	I1126 20:14:00.838152   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:00.838207   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.842001   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.845613   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:00.845684   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:00.874701   59960 cri.go:89] found id: ""
	I1126 20:14:00.874726   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.874735   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:00.874742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:00.874821   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:00.903003   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:00.903027   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:00.903032   59960 cri.go:89] found id: ""
	I1126 20:14:00.903039   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:00.903097   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.907398   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.911095   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:00.911169   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:00.937717   59960 cri.go:89] found id: ""
	I1126 20:14:00.937741   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.937750   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:00.937757   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:00.937815   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:00.964659   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:00.964683   59960 cri.go:89] found id: ""
	I1126 20:14:00.964692   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:00.964761   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.969052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:00.969128   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:00.996896   59960 cri.go:89] found id: ""
	I1126 20:14:00.996921   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.996930   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:00.996940   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:00.996968   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:01.052982   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:01.053013   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:01.164358   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:01.164396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:01.245847   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:01.237260   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.238200   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.239244   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.240970   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.241435   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:14:01.245874   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:01.245888   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:01.278036   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:01.278066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:01.321761   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:01.321798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:01.349850   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:01.349877   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:01.362087   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:01.362115   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:01.406110   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:01.406143   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:01.488538   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:01.488580   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:01.524108   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:01.524314   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:04.107171   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:04.119134   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:04.119206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:04.150892   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:04.150913   59960 cri.go:89] found id: ""
	I1126 20:14:04.150920   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:04.150993   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.154614   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:04.154713   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:04.181842   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:04.181866   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:04.181870   59960 cri.go:89] found id: ""
	I1126 20:14:04.181878   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:04.181958   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.185706   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.189884   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:04.190033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:04.217117   59960 cri.go:89] found id: ""
	I1126 20:14:04.217143   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.217152   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:04.217159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:04.217218   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:04.244873   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:04.244893   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:04.244897   59960 cri.go:89] found id: ""
	I1126 20:14:04.244904   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:04.244962   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.248633   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.252113   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:04.252223   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:04.281381   59960 cri.go:89] found id: ""
	I1126 20:14:04.281410   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.281420   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:04.281426   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:04.281484   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:04.309793   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:04.309817   59960 cri.go:89] found id: ""
	I1126 20:14:04.309825   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:04.309881   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.313555   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:04.313625   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:04.341073   59960 cri.go:89] found id: ""
	I1126 20:14:04.341100   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.341109   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:04.341117   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:04.341129   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:04.436704   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:04.436741   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:04.511848   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:04.500099   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.500700   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506376   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506925   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.508357   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:04.500099   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.500700   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506376   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506925   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.508357   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:04.511872   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:04.511887   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:04.572587   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:04.572662   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:04.622150   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:04.622182   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:04.648129   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:04.648200   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:04.736436   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:04.736472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:04.748750   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:04.748783   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:04.784731   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:04.784756   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:04.861032   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:04.861067   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:04.888273   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:04.888306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:07.422077   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:07.432698   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:07.432776   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:07.463525   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:07.463545   59960 cri.go:89] found id: ""
	I1126 20:14:07.463553   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:07.463605   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.467175   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:07.467243   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:07.497801   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:07.497821   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:07.497826   59960 cri.go:89] found id: ""
	I1126 20:14:07.497833   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:07.497888   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.501759   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.505120   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:07.505198   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:07.539084   59960 cri.go:89] found id: ""
	I1126 20:14:07.539112   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.539121   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:07.539127   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:07.539189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:07.567688   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:07.567713   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:07.567720   59960 cri.go:89] found id: ""
	I1126 20:14:07.567727   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:07.567788   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.571445   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.575895   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:07.575973   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:07.603679   59960 cri.go:89] found id: ""
	I1126 20:14:07.603704   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.603713   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:07.603720   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:07.603801   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:07.633845   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:07.633869   59960 cri.go:89] found id: ""
	I1126 20:14:07.633877   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:07.633982   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.638439   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:07.638510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:07.669305   59960 cri.go:89] found id: ""
	I1126 20:14:07.669329   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.669338   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:07.669348   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:07.669361   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:07.746001   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:07.746039   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:07.773829   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:07.773859   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:07.806673   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:07.806705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:07.847992   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:07.848029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:07.876479   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:07.876507   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:07.952982   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:07.953018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:08.054195   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:08.054235   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:08.071790   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:08.071819   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:08.158168   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:08.148798   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.150262   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.151831   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.152401   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.154098   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:08.148798   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.150262   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.151831   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.152401   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.154098   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:08.158237   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:08.158266   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:08.185227   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:08.185257   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:10.730401   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:10.741460   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:10.741529   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:10.774241   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:10.774263   59960 cri.go:89] found id: ""
	I1126 20:14:10.774270   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:10.774327   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.778033   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:10.778103   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:10.806991   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:10.807015   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:10.807021   59960 cri.go:89] found id: ""
	I1126 20:14:10.807028   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:10.807083   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.810846   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.814441   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:10.814513   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:10.843200   59960 cri.go:89] found id: ""
	I1126 20:14:10.843226   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.843236   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:10.843242   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:10.843301   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:10.871039   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:10.871062   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:10.871068   59960 cri.go:89] found id: ""
	I1126 20:14:10.871075   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:10.871129   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.874747   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.878577   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:10.878661   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:10.907317   59960 cri.go:89] found id: ""
	I1126 20:14:10.907343   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.907352   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:10.907359   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:10.907414   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:10.936274   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:10.936297   59960 cri.go:89] found id: ""
	I1126 20:14:10.936306   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:10.936385   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.939976   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:10.940048   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:10.969776   59960 cri.go:89] found id: ""
	I1126 20:14:10.969848   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.969884   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:10.969911   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:10.969997   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:11.067923   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:11.067964   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:11.082749   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:11.082781   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:11.124244   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:11.124281   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:11.173196   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:11.173232   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:11.200233   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:11.200268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:11.284292   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:11.284327   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:11.317517   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:11.317545   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:11.395020   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:11.386165   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.387087   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388651   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388979   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.390832   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:11.386165   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.387087   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388651   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388979   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.390832   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:11.395043   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:11.395056   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:11.422025   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:11.422059   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:11.500554   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:11.500588   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.028990   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:14.043196   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:14.043275   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:14.078393   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:14.078418   59960 cri.go:89] found id: ""
	I1126 20:14:14.078426   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:14.078485   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.082581   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:14.082679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:14.113586   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:14.113611   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:14.113616   59960 cri.go:89] found id: ""
	I1126 20:14:14.113623   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:14.113677   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.117367   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.120847   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:14.120921   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:14.147191   59960 cri.go:89] found id: ""
	I1126 20:14:14.147214   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.147222   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:14.147229   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:14.147287   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:14.173461   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:14.173483   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.173489   59960 cri.go:89] found id: ""
	I1126 20:14:14.173496   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:14.173560   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.177359   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.180846   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:14.180926   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:14.211699   59960 cri.go:89] found id: ""
	I1126 20:14:14.211731   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.211740   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:14.211747   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:14.211815   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:14.245320   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:14.245343   59960 cri.go:89] found id: ""
	I1126 20:14:14.245352   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:14.245422   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.249066   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:14.249133   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:14.277385   59960 cri.go:89] found id: ""
	I1126 20:14:14.277407   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.277415   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:14.277424   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:14.277436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:14.289839   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:14.289866   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:14.361142   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:14.352896   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.353542   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355081   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355655   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.357173   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:14.352896   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.353542   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355081   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355655   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.357173   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:14.361165   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:14.361179   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:14.419666   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:14.419762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:14.468633   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:14.468667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:14.557664   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:14.557696   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:14.583538   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:14.583567   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.612806   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:14.612834   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:14.638272   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:14.638300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:14.721230   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:14.721268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:14.755109   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:14.755142   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:17.358125   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:17.371898   59960 out.go:203] 
	W1126 20:14:17.375212   59960 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1126 20:14:17.375248   59960 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1126 20:14:17.375258   59960 out.go:285] * Related issues:
	W1126 20:14:17.375279   59960 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1126 20:14:17.375299   59960 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1126 20:14:17.378409   59960 out.go:203] 
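The `K8S_APISERVER_MISSING` exit above follows the `pgrep` probe at 20:14:17.358: minikube repeatedly runs `sudo pgrep -xnf kube-apiserver.*minikube.*` and gives up when no process command line ever matches within the 6m0s window. A minimal sketch of that matching logic (an illustration, not minikube's actual code — `apiserver_running` and `APISERVER_PATTERN` are names invented here):

```python
import re

# pgrep -xf matches the pattern against the FULL command line and requires
# the whole line to match, which re.fullmatch reproduces; -n (newest match
# only) does not change whether any process matches at all.
APISERVER_PATTERN = re.compile(r"kube-apiserver.*minikube.*")

def apiserver_running(cmdlines):
    """True if any full process command line matches the pgrep pattern."""
    return any(APISERVER_PATTERN.fullmatch(c) for c in cmdlines)
```

With no matching process on the node, this stays false for the whole wait window, which is exactly the "apiserver process never appeared" condition reported above.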
	
	
	==> CRI-O <==
	Nov 26 20:07:27 ha-278127 crio[667]: time="2025-11-26T20:07:27.974719211Z" level=info msg="Started container" PID=1450 containerID=0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee description=kube-system/kube-controller-manager-ha-278127/kube-controller-manager id=87dec93c-7b21-4bf6-943c-261f225c113f name=/runtime.v1.RuntimeService/StartContainer sandboxID=aaf24b4012ae22573565b29a9c87fa6c77cadf206a779d5e6c1de76d289f128f
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.929319714Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ec2c398f-23e5-463c-bbb1-09030f312307 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.930440903Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8fc66d00-8c37-4d25-84c6-7d7ac1c54ce3 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.932121756Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5c15308b-e98f-4109-8cbc-9192ac697f01 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.932226698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.940571173Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.940960238Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f34edad928de60e13d64480bf036aa1cf6b11ecfb7c751ef02ef81267e506bc/merged/etc/passwd: no such file or directory"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.941066542Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f34edad928de60e13d64480bf036aa1cf6b11ecfb7c751ef02ef81267e506bc/merged/etc/group: no such file or directory"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.941381721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.959928416Z" level=info msg="Created container 1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5: kube-system/storage-provisioner/storage-provisioner" id=5c15308b-e98f-4109-8cbc-9192ac697f01 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.960936581Z" level=info msg="Starting container: 1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5" id=51eb399f-be44-48a0-a1b4-1c62267c418c name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.967526563Z" level=info msg="Started container" PID=1462 containerID=1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5 description=kube-system/storage-provisioner/storage-provisioner id=51eb399f-be44-48a0-a1b4-1c62267c418c name=/runtime.v1.RuntimeService/StartContainer sandboxID=21dd814126bdbbb8dab349806b778ddb306dc5100a35c1bd2fe40c8004bcd523
	Nov 26 20:07:44 ha-278127 conmon[1447]: conmon 0e221d151c3ca5256368 <ninfo>: container 1450 exited with status 1
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.240819859Z" level=info msg="Removing container: c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.256615675Z" level=info msg="Error loading conmon cgroup of container c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9: cgroup deleted" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.261280075Z" level=info msg="Removed container c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.929977452Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c9fc5566-53be-4e3a-ad5b-047dfe5df6f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.931894512Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c6b73409-e91d-4450-8804-870ca6e0b63d name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.933188155Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=b5b42e4a-b813-4466-87cd-d441eaaf849b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.933308096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.94134128Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.942037763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.965749324Z" level=info msg="Created container b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=b5b42e4a-b813-4466-87cd-d441eaaf849b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.966758303Z" level=info msg="Starting container: b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca" id=d8573d49-5a20-4657-b169-a7727449cf6d name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.975098568Z" level=info msg="Started container" PID=1498 containerID=b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca description=kube-system/kube-controller-manager-ha-278127/kube-controller-manager id=d8573d49-5a20-4657-b169-a7727449cf6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=aaf24b4012ae22573565b29a9c87fa6c77cadf206a779d5e6c1de76d289f128f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	b3d2b3bea3b9f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Running             kube-controller-manager   6                   aaf24b4012ae2       kube-controller-manager-ha-278127   kube-system
	1de9ee4cdf652       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   6 minutes ago       Running             storage-provisioner       5                   21dd814126bdb       storage-provisioner                 kube-system
	0e221d151c3ca       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   6 minutes ago       Exited              kube-controller-manager   5                   aaf24b4012ae2       kube-controller-manager-ha-278127   kube-system
	1a9b5dae15334       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       4                   21dd814126bdb       storage-provisioner                 kube-system
	1622dad7c067a       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Running             kube-vip                  3                   d4cb99de55854       kube-vip-ha-278127                  kube-system
	822876229de0f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   dfdbe4360041c       coredns-66bc5c9577-ndh8k            kube-system
	aef907239d286       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   2                   78d3fb27335b4       busybox-7b57f96db7-vwpd8            default
	787754735cfed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   2                   89e2c226e09e6       coredns-66bc5c9577-bbpk7            kube-system
	d140d1950675e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               2                   b9a376ab09c3c       kindnet-gp24m                       kube-system
	7b45294efb449       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                2                   55fa9dab05c0d       kube-proxy-5fndw                    kube-system
	f5647f1652cc1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   7 minutes ago       Running             kube-apiserver            3                   c932fd4498a66       kube-apiserver-ha-278127            kube-system
	040a854900180       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   7 minutes ago       Running             kube-scheduler            2                   773a6356cec93       kube-scheduler-ha-278127            kube-system
	106da3c0ad4fa       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Exited              kube-vip                  2                   d4cb99de55854       kube-vip-ha-278127                  kube-system
	cdc1651fea8f1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   7 minutes ago       Running             etcd                      2                   11d5891e684b3       etcd-ha-278127                      kube-system
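The container-status table above is fixed-width `crictl ps -a` style output: column gaps are runs of two or more spaces, while cell values such as "6 minutes ago" keep their single internal spaces, so a naive `split()` breaks the CREATED column. A small parsing sketch (a convenience helper assumed here, not part of any tool):

```python
import re

def parse_crictl_row(line):
    # Split on runs of 2+ spaces so multi-word cells like "6 minutes ago"
    # or "7 minutes ago" survive as single fields.
    return re.split(r"\s{2,}", line.strip())
```

Applied to the kube-controller-manager row, this yields nine fields with `fields[3] == "Running"` and `fields[2] == "6 minutes ago"`.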
	
	
	==> coredns [787754735cfed2e99ff1e0336a870da9b5e17eaed8d9d79b97dbfa75dd83059c] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45898 - 29384 "HINFO IN 3170256484025904488.3791759156995599050. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014293297s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [822876229de0f6cb25db3449774153712b72a0c129090a61a1aeadc760c6cad4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53615 - 2115 "HINFO IN 6991506871979899616.8642824612935885209. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017055518s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
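The reflector errors in both CoreDNS logs show two distinct failure modes against the in-cluster service VIP 10.96.0.1:443: `i/o timeout` (packets are dropped, typically because no healthy apiserver endpoint sits behind the service) and `connection refused` (the endpoint is reachable but nothing is listening on the port). A sketch of that distinction (a hypothetical classifier, not a CoreDNS facility):

```python
def classify_reflector_error(line):
    # "i/o timeout": traffic to the VIP is blackholed (no endpoint answers).
    if "i/o timeout" in line:
        return "timeout"
    # "connection refused": TCP reaches a host, but the apiserver port is closed.
    if "connection refused" in line:
        return "refused"
    return "other"
```

In the log above the errors shift from `timeout` to `refused` over time, consistent with the apiserver process dying rather than the network path failing.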
	
	
	==> describe nodes <==
	Name:               ha-278127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T19_58_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:58:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:14:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:13:01 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:13:01 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:13:01 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:13:01 +0000   Wed, 26 Nov 2025 19:59:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-278127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                370e19a1-8269-418f-82ce-e7791d2f9cc5
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vwpd8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-bbpk7             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 coredns-66bc5c9577-ndh8k             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-ha-278127                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-gp24m                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-278127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-278127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-5fndw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-278127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-278127                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m44s                  kube-proxy       
	  Normal   Starting                 9m36s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)      kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-278127 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m33s                  node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           9m32s                  node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           9m2s                   node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   Starting                 7m55s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m55s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  7m54s (x8 over 7m55s)  kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m54s (x8 over 7m55s)  kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m54s (x8 over 7m55s)  kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m7s                   node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	
	
	Name:               ha-278127-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_26T19_58_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:58:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:05:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-278127-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                77d88c20-b1f3-431d-ace6-24a69c640dde
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-72bpv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-278127-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-x82cz                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-278127-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-278127-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-p4455                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-278127-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-278127-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 9m17s              kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           15m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             11m                node-controller  Node ha-278127-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           10m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-278127-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           9m33s              node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           9m32s              node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           9m2s               node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           6m7s               node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   NodeNotReady             5m17s              node-controller  Node ha-278127-m02 status is now: NodeNotReady
	
	
	Name:               ha-278127-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_26T20_01_35_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:01:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:05:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-278127-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                4949defc-dfd6-4bc6-9c78-3cb968da2b3e
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hqq6q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 kindnet-qbd6w               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-proxy-d4p99            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 8m48s                 kube-proxy       
	  Normal   Starting                 12m                   kube-proxy       
	  Warning  CgroupV1                 12m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)     kubelet          Node ha-278127-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)     kubelet          Node ha-278127-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)     kubelet          Node ha-278127-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 12m                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                   node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           12m                   node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           12m                   node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   NodeReady                12m                   kubelet          Node ha-278127-m04 status is now: NodeReady
	  Normal   RegisteredNode           10m                   node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           9m33s                 node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           9m32s                 node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   Starting                 9m11s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m11s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m8s (x8 over 9m11s)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m8s (x8 over 9m11s)  kubelet          Node ha-278127-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m8s (x8 over 9m11s)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m2s                  node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           6m7s                  node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   NodeNotReady             5m17s                 node-controller  Node ha-278127-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Nov26 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014220] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507172] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032749] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.773464] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.697672] kauditd_printk_skb: 36 callbacks suppressed
	[Nov26 19:37] overlayfs: idmapped layers are currently not supported
	[  +0.074077] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov26 19:39] hrtimer: interrupt took 16123050 ns
	[Nov26 19:43] overlayfs: idmapped layers are currently not supported
	[Nov26 19:44] overlayfs: idmapped layers are currently not supported
	[Nov26 19:58] overlayfs: idmapped layers are currently not supported
	[ +33.942210] overlayfs: idmapped layers are currently not supported
	[Nov26 19:59] overlayfs: idmapped layers are currently not supported
	[Nov26 20:01] overlayfs: idmapped layers are currently not supported
	[Nov26 20:02] overlayfs: idmapped layers are currently not supported
	[Nov26 20:04] overlayfs: idmapped layers are currently not supported
	[  +3.105496] overlayfs: idmapped layers are currently not supported
	[ +37.228314] overlayfs: idmapped layers are currently not supported
	[Nov26 20:05] overlayfs: idmapped layers are currently not supported
	[Nov26 20:06] overlayfs: idmapped layers are currently not supported
	[  +3.713866] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cdc1651fea8f10bd665928dcc7bb174b74385eb06e911da9629df17c0d9d29e8] <==
	{"level":"info","ts":"2025-11-26T20:08:15.335606Z","caller":"traceutil/trace.go:172","msg":"trace[1383728067] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:2; response_revision:2566; }","duration":"123.275763ms","start":"2025-11-26T20:08:15.212323Z","end":"2025-11-26T20:08:15.335599Z","steps":["trace[1383728067] 'agreement among raft nodes before linearized reading'  (duration: 123.198694ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.351724Z","caller":"traceutil/trace.go:172","msg":"trace[1874297602] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2567; }","duration":"115.025762ms","start":"2025-11-26T20:08:15.236689Z","end":"2025-11-26T20:08:15.351715Z","steps":["trace[1874297602] 'agreement among raft nodes before linearized reading'  (duration: 114.988281ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.353572Z","caller":"traceutil/trace.go:172","msg":"trace[590005640] range","detail":"{range_begin:/registry/cronjobs; range_end:; response_count:0; response_revision:2567; }","duration":"117.001923ms","start":"2025-11-26T20:08:15.236561Z","end":"2025-11-26T20:08:15.353563Z","steps":["trace[590005640] 'agreement among raft nodes before linearized reading'  (duration: 116.956164ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.353840Z","caller":"traceutil/trace.go:172","msg":"trace[1252963882] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:2567; }","duration":"117.289377ms","start":"2025-11-26T20:08:15.236544Z","end":"2025-11-26T20:08:15.353834Z","steps":["trace[1252963882] 'agreement among raft nodes before linearized reading'  (duration: 117.256032ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.353913Z","caller":"traceutil/trace.go:172","msg":"trace[297213381] range","detail":"{range_begin:/registry/roles; range_end:; response_count:0; response_revision:2567; }","duration":"117.437904ms","start":"2025-11-26T20:08:15.236470Z","end":"2025-11-26T20:08:15.353908Z","steps":["trace[297213381] 'agreement among raft nodes before linearized reading'  (duration: 117.416234ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.364849Z","caller":"traceutil/trace.go:172","msg":"trace[1421861513] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:59; response_revision:2567; }","duration":"128.412849ms","start":"2025-11-26T20:08:15.236425Z","end":"2025-11-26T20:08:15.364838Z","steps":["trace[1421861513] 'agreement among raft nodes before linearized reading'  (duration: 128.131786ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.364893Z","caller":"traceutil/trace.go:172","msg":"trace[1461250281] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:2567; }","duration":"128.480491ms","start":"2025-11-26T20:08:15.236409Z","end":"2025-11-26T20:08:15.364889Z","steps":["trace[1461250281] 'agreement among raft nodes before linearized reading'  (duration: 128.461948ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.364921Z","caller":"traceutil/trace.go:172","msg":"trace[502786890] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:2567; }","duration":"128.524388ms","start":"2025-11-26T20:08:15.236393Z","end":"2025-11-26T20:08:15.364917Z","steps":["trace[502786890] 'agreement among raft nodes before linearized reading'  (duration: 128.51112ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.364974Z","caller":"traceutil/trace.go:172","msg":"trace[1598355909] range","detail":"{range_begin:/registry/ipaddresses/; range_end:/registry/ipaddresses0; response_count:2; response_revision:2567; }","duration":"128.579657ms","start":"2025-11-26T20:08:15.236389Z","end":"2025-11-26T20:08:15.364969Z","steps":["trace[1598355909] 'agreement among raft nodes before linearized reading'  (duration: 128.540937ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365001Z","caller":"traceutil/trace.go:172","msg":"trace[640320053] range","detail":"{range_begin:/registry/daemonsets; range_end:; response_count:0; response_revision:2567; }","duration":"128.6531ms","start":"2025-11-26T20:08:15.236344Z","end":"2025-11-26T20:08:15.364998Z","steps":["trace[640320053] 'agreement among raft nodes before linearized reading'  (duration: 128.639283ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365081Z","caller":"traceutil/trace.go:172","msg":"trace[703339521] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2567; }","duration":"128.762571ms","start":"2025-11-26T20:08:15.236311Z","end":"2025-11-26T20:08:15.365074Z","steps":["trace[703339521] 'agreement among raft nodes before linearized reading'  (duration: 128.697349ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365157Z","caller":"traceutil/trace.go:172","msg":"trace[879094705] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:4; response_revision:2567; }","duration":"128.947693ms","start":"2025-11-26T20:08:15.236204Z","end":"2025-11-26T20:08:15.365152Z","steps":["trace[879094705] 'agreement among raft nodes before linearized reading'  (duration: 128.887427ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365183Z","caller":"traceutil/trace.go:172","msg":"trace[1712061630] range","detail":"{range_begin:/registry/ingress; range_end:; response_count:0; response_revision:2567; }","duration":"129.057033ms","start":"2025-11-26T20:08:15.236122Z","end":"2025-11-26T20:08:15.365179Z","steps":["trace[1712061630] 'agreement among raft nodes before linearized reading'  (duration: 129.044151ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365210Z","caller":"traceutil/trace.go:172","msg":"trace[884725043] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations; range_end:; response_count:0; response_revision:2567; }","duration":"130.176311ms","start":"2025-11-26T20:08:15.235029Z","end":"2025-11-26T20:08:15.365206Z","steps":["trace[884725043] 'agreement among raft nodes before linearized reading'  (duration: 130.162199ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365235Z","caller":"traceutil/trace.go:172","msg":"trace[1960126933] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:2567; }","duration":"138.218251ms","start":"2025-11-26T20:08:15.227012Z","end":"2025-11-26T20:08:15.365231Z","steps":["trace[1960126933] 'agreement among raft nodes before linearized reading'  (duration: 138.206222ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365306Z","caller":"traceutil/trace.go:172","msg":"trace[700774855] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:12; response_revision:2567; }","duration":"138.316595ms","start":"2025-11-26T20:08:15.226986Z","end":"2025-11-26T20:08:15.365302Z","steps":["trace[700774855] 'agreement among raft nodes before linearized reading'  (duration: 138.256756ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365332Z","caller":"traceutil/trace.go:172","msg":"trace[1878756393] range","detail":"{range_begin:/registry/resourceclaims/; range_end:/registry/resourceclaims0; response_count:0; response_revision:2567; }","duration":"138.360049ms","start":"2025-11-26T20:08:15.226968Z","end":"2025-11-26T20:08:15.365328Z","steps":["trace[1878756393] 'agreement among raft nodes before linearized reading'  (duration: 138.347619ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365357Z","caller":"traceutil/trace.go:172","msg":"trace[2116024509] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:2567; }","duration":"138.462432ms","start":"2025-11-26T20:08:15.226891Z","end":"2025-11-26T20:08:15.365354Z","steps":["trace[2116024509] 'agreement among raft nodes before linearized reading'  (duration: 138.449927ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365434Z","caller":"traceutil/trace.go:172","msg":"trace[1377873000] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:11; response_revision:2567; }","duration":"138.557683ms","start":"2025-11-26T20:08:15.226872Z","end":"2025-11-26T20:08:15.365429Z","steps":["trace[1377873000] 'agreement among raft nodes before linearized reading'  (duration: 138.494029ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365486Z","caller":"traceutil/trace.go:172","msg":"trace[251490351] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:4; response_revision:2567; }","duration":"138.671406ms","start":"2025-11-26T20:08:15.226810Z","end":"2025-11-26T20:08:15.365482Z","steps":["trace[251490351] 'agreement among raft nodes before linearized reading'  (duration: 138.633211ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365537Z","caller":"traceutil/trace.go:172","msg":"trace[570012177] range","detail":"{range_begin:/registry/daemonsets/; range_end:/registry/daemonsets0; response_count:2; response_revision:2567; }","duration":"138.744439ms","start":"2025-11-26T20:08:15.226789Z","end":"2025-11-26T20:08:15.365533Z","steps":["trace[570012177] 'agreement among raft nodes before linearized reading'  (duration: 138.706334ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365586Z","caller":"traceutil/trace.go:172","msg":"trace[1618327843] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; response_count:3; response_revision:2567; }","duration":"138.820441ms","start":"2025-11-26T20:08:15.226762Z","end":"2025-11-26T20:08:15.365583Z","steps":["trace[1618327843] 'agreement among raft nodes before linearized reading'  (duration: 138.784002ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365726Z","caller":"traceutil/trace.go:172","msg":"trace[1190967021] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:44; response_revision:2567; }","duration":"138.982458ms","start":"2025-11-26T20:08:15.226740Z","end":"2025-11-26T20:08:15.365722Z","steps":["trace[1190967021] 'agreement among raft nodes before linearized reading'  (duration: 138.855731ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365752Z","caller":"traceutil/trace.go:172","msg":"trace[191199000] range","detail":"{range_begin:/registry/ipaddresses; range_end:; response_count:0; response_revision:2567; }","duration":"139.0245ms","start":"2025-11-26T20:08:15.226723Z","end":"2025-11-26T20:08:15.365747Z","steps":["trace[191199000] 'agreement among raft nodes before linearized reading'  (duration: 139.012775ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-26T20:08:15.365777Z","caller":"traceutil/trace.go:172","msg":"trace[338323478] range","detail":"{range_begin:/registry/deviceclasses/; range_end:/registry/deviceclasses0; response_count:0; response_revision:2567; }","duration":"139.071482ms","start":"2025-11-26T20:08:15.226701Z","end":"2025-11-26T20:08:15.365773Z","steps":["trace[338323478] 'agreement among raft nodes before linearized reading'  (duration: 139.05988ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:14:26 up 56 min,  0 user,  load average: 1.10, 1.13, 1.23
	Linux ha-278127 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d140d1950675ee8ccd9c84ef7a5a7da1b1e44300cc3e3a958c71e1138816061f] <==
	I1126 20:13:42.226790       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:13:52.226003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:13:52.226037       1 main.go:301] handling current node
	I1126 20:13:52.226054       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:13:52.226060       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:13:52.226201       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:13:52.226262       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:14:02.232091       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:14:02.232128       1 main.go:301] handling current node
	I1126 20:14:02.232146       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:14:02.232153       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:14:02.232327       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:14:02.232341       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:14:12.226411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:14:12.226443       1 main.go:301] handling current node
	I1126 20:14:12.226460       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:14:12.226467       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:14:12.226646       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:14:12.226661       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:14:22.226927       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:14:22.227048       1 main.go:301] handling current node
	I1126 20:14:22.227109       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:14:22.227140       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:14:22.227311       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:14:22.227350       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [f5647f1652cc11a195a49a98906391e791c3136916a5e3c249907585088fad42] <==
	{"level":"warn","ts":"2025-11-26T20:08:15.185150Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019681e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185302Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400264b2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185460Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001969860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185569Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40023790e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185752Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a24960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185791Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002218000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.188111Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400089eb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.188335Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002471680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190353Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400264b2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190396Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f503c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190413Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029423c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190430Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001969860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190463Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a3b860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190481Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002378000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190499Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190513Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190529Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a24960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190727Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400089e000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	W1126 20:08:17.152713       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1126 20:08:17.154506       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:08:17.162706       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:08:19.148616       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:08:22.296241       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:09:09.201336       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:09:09.262823       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee] <==
	I1126 20:07:29.733675       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:07:30.451982       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1126 20:07:30.452014       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:07:30.453426       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1126 20:07:30.453688       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1126 20:07:30.453871       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1126 20:07:30.453945       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1126 20:07:44.473711       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststar
thook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca] <==
	E1126 20:08:39.054180       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:39.054188       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:39.054196       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:39.054201       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054573       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054603       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054612       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054617       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054623       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	I1126 20:08:59.075009       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mttpp"
	I1126 20:08:59.108301       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mttpp"
	I1126 20:08:59.108397       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-278127-m03"
	I1126 20:08:59.137341       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-278127-m03"
	I1126 20:08:59.137379       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-cjs7r"
	I1126 20:08:59.170242       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-cjs7r"
	I1126 20:08:59.170364       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-278127-m03"
	I1126 20:08:59.200927       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-278127-m03"
	I1126 20:08:59.201053       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-278127-m03"
	I1126 20:08:59.231029       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-278127-m03"
	I1126 20:08:59.231129       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-278127-m03"
	I1126 20:08:59.266325       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-278127-m03"
	I1126 20:08:59.266427       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-278127-m03"
	I1126 20:08:59.307467       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-278127-m03"
	I1126 20:14:09.243470       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-hqq6q"
	I1126 20:14:19.320009       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-72bpv"
	
	
	==> kube-proxy [7b45294efb44968b6b5d7d6994b3f6f118094d33ccfb9aa9a125e9d6110f41b3] <==
	I1126 20:07:27.549779       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	I1126 20:07:27.549805       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	I1126 20:07:27.549666       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	E1126 20:07:31.630334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:31.630336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:31.630470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:31.630581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:34.702391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:34.702403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:34.702509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:34.702664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:41.518262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:41.518267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:41.518397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:41.518465       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:07:41.518496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:52.462253       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:07:52.462312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:52.462400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:55.534388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:55.534401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:08:05.710253       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:08:08.782267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:08:11.854307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:08:14.930219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [040a8549001808f2d3fce3d4cf9f8dff272706173960c5e8004af8b1ea042e80] <==
	I1126 20:06:34.800738       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:06:39.572983       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:06:39.573028       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:06:39.573039       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:06:39.573046       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:06:39.693522       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:06:39.693624       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:06:39.703802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:06:39.704071       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:06:39.715887       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:06:39.704092       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:06:39.816440       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:07:21 ha-278127 kubelet[805]: E1126 20:07:21.263300     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:23 ha-278127 kubelet[805]: E1126 20:07:23.240740     805 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-278127.187ba7448d330dec  default   2559 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-278127,UID:ha-278127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-278127 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-278127,},FirstTimestamp:2025-11-26 20:06:31 +0000 UTC,LastTimestamp:2025-11-26 20:06:32.032348366 +0000 UTC m=+0.308576049,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-278127,}"
	Nov 26 20:07:27 ha-278127 kubelet[805]: I1126 20:07:27.929241     805 scope.go:117] "RemoveContainer" containerID="c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9"
	Nov 26 20:07:28 ha-278127 kubelet[805]: I1126 20:07:28.928664     805 scope.go:117] "RemoveContainer" containerID="1a9b5dae1533404a7bf684e278d137906a4f310cb5682e61046be41540e6f32b"
	Nov 26 20:07:31 ha-278127 kubelet[805]: E1126 20:07:31.162433     805 controller.go:195] "Failed to update lease" err="Put \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:31 ha-278127 kubelet[805]: E1126 20:07:31.265440     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes ha-278127)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.163428     805 controller.go:195] "Failed to update lease" err="Put \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: I1126 20:07:41.163974     805 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.266735     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.266930     805 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
	Nov 26 20:07:45 ha-278127 kubelet[805]: I1126 20:07:45.237637     805 scope.go:117] "RemoveContainer" containerID="c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9"
	Nov 26 20:07:45 ha-278127 kubelet[805]: I1126 20:07:45.238084     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:07:45 ha-278127 kubelet[805]: E1126 20:07:45.238254     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:07:47 ha-278127 kubelet[805]: I1126 20:07:47.402612     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:07:47 ha-278127 kubelet[805]: E1126 20:07:47.402814     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:07:49 ha-278127 kubelet[805]: E1126 20:07:49.241093     805 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kindnet-gp24m)" podUID="4d3597e4-de22-4f29-8c58-1aaabd4a8a56" pod="kube-system/kindnet-gp24m"
	Nov 26 20:07:51 ha-278127 kubelet[805]: E1126 20:07:51.165080     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
	Nov 26 20:07:57 ha-278127 kubelet[805]: E1126 20:07:57.243812     805 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-278127.187ba7448d32cbe5  default   2561 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-278127,UID:ha-278127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-278127 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-278127,},FirstTimestamp:2025-11-26 20:06:31 +0000 UTC,LastTimestamp:2025-11-26 20:06:32.033252015 +0000 UTC m=+0.309479698,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-278127,}"
	Nov 26 20:08:00 ha-278127 kubelet[805]: I1126 20:08:00.928844     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:08:00 ha-278127 kubelet[805]: E1126 20:08:00.929077     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:08:01 ha-278127 kubelet[805]: E1126 20:08:01.366584     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	Nov 26 20:08:01 ha-278127 kubelet[805]: E1126 20:08:01.649883     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recurs
iveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"ha-278127\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-278127/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:08:11 ha-278127 kubelet[805]: E1126 20:08:11.650209     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:08:11 ha-278127 kubelet[805]: E1126 20:08:11.768381     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": context deadline exceeded" interval="800ms"
	Nov 26 20:08:12 ha-278127 kubelet[805]: I1126 20:08:12.929036     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-278127 -n ha-278127
helpers_test.go:269: (dbg) Run:  kubectl --context ha-278127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-l9p24 busybox-7b57f96db7-rcsd2
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-278127 describe pod busybox-7b57f96db7-l9p24 busybox-7b57f96db7-rcsd2
helpers_test.go:290: (dbg) kubectl --context ha-278127 describe pod busybox-7b57f96db7-l9p24 busybox-7b57f96db7-rcsd2:

-- stdout --
	Name:             busybox-7b57f96db7-l9p24
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jltdj (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-jltdj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  20s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	
	
	Name:             busybox-7b57f96db7-rcsd2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zn4mp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-zn4mp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  9s    default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (5.82s)

TestMultiControlPlane/serial/AddSecondaryNode (85.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 node add --control-plane --alsologtostderr -v 5: (1m19.107573231s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5: exit status 7 (848.664909ms)

-- stdout --
	ha-278127
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-278127-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-278127-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	ha-278127-m05
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

-- /stdout --
** stderr ** 
	I1126 20:15:48.505319   79398 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:15:48.505494   79398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:15:48.505501   79398 out.go:374] Setting ErrFile to fd 2...
	I1126 20:15:48.505508   79398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:15:48.505759   79398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:15:48.505976   79398 out.go:368] Setting JSON to false
	I1126 20:15:48.506012   79398 mustload.go:66] Loading cluster: ha-278127
	I1126 20:15:48.506091   79398 notify.go:221] Checking for updates...
	I1126 20:15:48.507048   79398 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:15:48.507070   79398 status.go:174] checking status of ha-278127 ...
	I1126 20:15:48.507649   79398 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:15:48.528429   79398 status.go:371] ha-278127 host status = "Running" (err=<nil>)
	I1126 20:15:48.528453   79398 host.go:66] Checking if "ha-278127" exists ...
	I1126 20:15:48.528754   79398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:15:48.564593   79398 host.go:66] Checking if "ha-278127" exists ...
	I1126 20:15:48.564897   79398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:15:48.564998   79398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:15:48.585007   79398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:15:48.695499   79398 ssh_runner.go:195] Run: systemctl --version
	I1126 20:15:48.703078   79398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:15:48.719077   79398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:15:48.801737   79398 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-26 20:15:48.791721539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:15:48.802276   79398 kubeconfig.go:125] found "ha-278127" server: "https://192.168.49.254:8443"
	I1126 20:15:48.802315   79398 api_server.go:166] Checking apiserver status ...
	I1126 20:15:48.802387   79398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:15:48.814609   79398 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/964/cgroup
	I1126 20:15:48.822829   79398 api_server.go:182] apiserver freezer: "4:freezer:/docker/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/crio/crio-f5647f1652cc11a195a49a98906391e791c3136916a5e3c249907585088fad42"
	I1126 20:15:48.822921   79398 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/crio/crio-f5647f1652cc11a195a49a98906391e791c3136916a5e3c249907585088fad42/freezer.state
	I1126 20:15:48.831546   79398 api_server.go:204] freezer state: "THAWED"
	I1126 20:15:48.831572   79398 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1126 20:15:48.839920   79398 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1126 20:15:48.839953   79398 status.go:463] ha-278127 apiserver status = Running (err=<nil>)
	I1126 20:15:48.839965   79398 status.go:176] ha-278127 status: &{Name:ha-278127 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:15:48.839981   79398 status.go:174] checking status of ha-278127-m02 ...
	I1126 20:15:48.840274   79398 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:15:48.869228   79398 status.go:371] ha-278127-m02 host status = "Running" (err=<nil>)
	I1126 20:15:48.869254   79398 host.go:66] Checking if "ha-278127-m02" exists ...
	I1126 20:15:48.869563   79398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:15:48.888130   79398 host.go:66] Checking if "ha-278127-m02" exists ...
	I1126 20:15:48.888436   79398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:15:48.888480   79398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:15:48.906930   79398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:15:49.015169   79398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:15:49.028365   79398 kubeconfig.go:125] found "ha-278127" server: "https://192.168.49.254:8443"
	I1126 20:15:49.028391   79398 api_server.go:166] Checking apiserver status ...
	I1126 20:15:49.028435   79398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1126 20:15:49.038259   79398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:15:49.038285   79398 status.go:463] ha-278127-m02 apiserver status = Running (err=<nil>)
	I1126 20:15:49.038294   79398 status.go:176] ha-278127-m02 status: &{Name:ha-278127-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:15:49.038310   79398 status.go:174] checking status of ha-278127-m04 ...
	I1126 20:15:49.038619   79398 cli_runner.go:164] Run: docker container inspect ha-278127-m04 --format={{.State.Status}}
	I1126 20:15:49.059733   79398 status.go:371] ha-278127-m04 host status = "Stopped" (err=<nil>)
	I1126 20:15:49.059753   79398 status.go:384] host is not running, skipping remaining checks
	I1126 20:15:49.059821   79398 status.go:176] ha-278127-m04 status: &{Name:ha-278127-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:15:49.059875   79398 status.go:174] checking status of ha-278127-m05 ...
	I1126 20:15:49.060285   79398 cli_runner.go:164] Run: docker container inspect ha-278127-m05 --format={{.State.Status}}
	I1126 20:15:49.083620   79398 status.go:371] ha-278127-m05 host status = "Running" (err=<nil>)
	I1126 20:15:49.083647   79398 host.go:66] Checking if "ha-278127-m05" exists ...
	I1126 20:15:49.083971   79398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m05
	I1126 20:15:49.103183   79398 host.go:66] Checking if "ha-278127-m05" exists ...
	I1126 20:15:49.103552   79398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:15:49.103620   79398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m05
	I1126 20:15:49.122419   79398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m05/id_rsa Username:docker}
	I1126 20:15:49.227397   79398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:15:49.246744   79398 kubeconfig.go:125] found "ha-278127" server: "https://192.168.49.254:8443"
	I1126 20:15:49.246772   79398 api_server.go:166] Checking apiserver status ...
	I1126 20:15:49.246813   79398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:15:49.258879   79398 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1205/cgroup
	I1126 20:15:49.268192   79398 api_server.go:182] apiserver freezer: "4:freezer:/docker/5b6efc2def53342deabe82239e22a49a19872c75be9a715be8ad81d703b9bc41/crio/crio-f8718b64c91ef4694a149d8360faef4f6c2d2c9039c05ad2c791522d95d40648"
	I1126 20:15:49.268263   79398 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5b6efc2def53342deabe82239e22a49a19872c75be9a715be8ad81d703b9bc41/crio/crio-f8718b64c91ef4694a149d8360faef4f6c2d2c9039c05ad2c791522d95d40648/freezer.state
	I1126 20:15:49.276166   79398 api_server.go:204] freezer state: "THAWED"
	I1126 20:15:49.276195   79398 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1126 20:15:49.284429   79398 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1126 20:15:49.284462   79398 status.go:463] ha-278127-m05 apiserver status = Running (err=<nil>)
	I1126 20:15:49.284472   79398 status.go:176] ha-278127-m05 status: &{Name:ha-278127-m05 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:615: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-278127
helpers_test.go:243: (dbg) docker inspect ha-278127:

-- stdout --
	[
	    {
	        "Id": "0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd",
	        "Created": "2025-11-26T19:57:51.94382214Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 60086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:06:25.13540784Z",
	            "FinishedAt": "2025-11-26T20:06:24.397214575Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/hosts",
	        "LogPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd-json.log",
	        "Name": "/ha-278127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-278127:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-278127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd",
	                "LowerDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-278127",
	                "Source": "/var/lib/docker/volumes/ha-278127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-278127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-278127",
	                "name.minikube.sigs.k8s.io": "ha-278127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb3aaf333e9f66a1f0a54705c2952cf94a31e67f170d0e073ad505006b4613f7",
	            "SandboxKey": "/var/run/docker/netns/cb3aaf333e9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-278127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:6e:15:9f:21:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "20cb65a83ad57cf8581cf982a5b25f381be527698b87a783139e32a436f750e9",
	                    "EndpointID": "217fa13f4a876f9a733e9c88a45d94a8aabe2f981d6e4c092ca2c647767455d3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-278127",
	                        "0081e5a17ed5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
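The `NetworkSettings.Ports` object in the inspect dump above is what the provisioning steps later in this log consume: the `22/tcp` → `32828` binding is extracted with a `docker container inspect -f` Go template and used as the SSH endpoint. A minimal sketch of reading that structure, using an abridged copy of the JSON above (the helper name is illustrative, not part of minikube):

```python
import json

# Abridged from the "Ports" object in the docker inspect output above.
ports_json = """
{
  "22/tcp":    [{"HostIp": "127.0.0.1", "HostPort": "32828"}],
  "8443/tcp":  [{"HostIp": "127.0.0.1", "HostPort": "32831"}],
  "32443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32832"}]
}
"""

ports = json.loads(ports_json)

def host_port(container_port: str) -> str:
    """Return the first host port bound to the given container port."""
    return ports[container_port][0]["HostPort"]

print(host_port("22/tcp"))    # the SSH endpoint dialed later in this log
print(host_port("8443/tcp"))  # the apiserver port binding
```

This mirrors the template visible further down (`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`): each container port maps to a list of host bindings, and the first entry's `HostPort` is taken.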
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-278127 -n ha-278127
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 logs -n 25: (2.789068749s)
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-278127 ssh -n ha-278127-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test_ha-278127-m03_ha-278127-m04.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp testdata/cp-test.txt ha-278127-m04:/home/docker/cp-test.txt                                                             │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837002730/001/cp-test_ha-278127-m04.txt │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127:/home/docker/cp-test_ha-278127-m04_ha-278127.txt                       │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127.txt                                                 │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127-m02:/home/docker/cp-test_ha-278127-m04_ha-278127-m02.txt               │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m02 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127-m02.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127-m03:/home/docker/cp-test_ha-278127-m04_ha-278127-m03.txt               │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m03 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127-m03.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ node    │ ha-278127 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ node    │ ha-278127 node start m02 --alsologtostderr -v 5                                                                                      │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:03 UTC │
	│ node    │ ha-278127 node list --alsologtostderr -v 5                                                                                           │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:03 UTC │                     │
	│ stop    │ ha-278127 stop --alsologtostderr -v 5                                                                                                │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:03 UTC │ 26 Nov 25 20:04 UTC │
	│ start   │ ha-278127 start --wait true --alsologtostderr -v 5                                                                                   │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:04 UTC │ 26 Nov 25 20:05 UTC │
	│ node    │ ha-278127 node list --alsologtostderr -v 5                                                                                           │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │                     │
	│ node    │ ha-278127 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │ 26 Nov 25 20:05 UTC │
	│ stop    │ ha-278127 stop --alsologtostderr -v 5                                                                                                │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │ 26 Nov 25 20:06 UTC │
	│ start   │ ha-278127 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:06 UTC │                     │
	│ node    │ ha-278127 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:14 UTC │ 26 Nov 25 20:15 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:06:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:06:24.854734   59960 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:06:24.854900   59960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.854911   59960 out.go:374] Setting ErrFile to fd 2...
	I1126 20:06:24.854917   59960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.855178   59960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:06:24.855529   59960 out.go:368] Setting JSON to false
	I1126 20:06:24.856339   59960 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2915,"bootTime":1764184670,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:06:24.856415   59960 start.go:143] virtualization:  
	I1126 20:06:24.859567   59960 out.go:179] * [ha-278127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:06:24.863328   59960 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:06:24.863432   59960 notify.go:221] Checking for updates...
	I1126 20:06:24.869239   59960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:06:24.872146   59960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:24.874915   59960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:06:24.877742   59960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:06:24.880612   59960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:06:24.883943   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:24.884479   59960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:06:24.917824   59960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:06:24.917967   59960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:06:24.982581   59960 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-26 20:06:24.973603153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:06:24.982686   59960 docker.go:319] overlay module found
	I1126 20:06:24.986072   59960 out.go:179] * Using the docker driver based on existing profile
	I1126 20:06:24.989065   59960 start.go:309] selected driver: docker
	I1126 20:06:24.989102   59960 start.go:927] validating driver "docker" against &{Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:24.989232   59960 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:06:24.989341   59960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:06:25.048426   59960 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-26 20:06:25.038525674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:06:25.048890   59960 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:06:25.048924   59960 cni.go:84] Creating CNI manager for ""
	I1126 20:06:25.048991   59960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1126 20:06:25.049039   59960 start.go:353] cluster config:
	{Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:25.052236   59960 out.go:179] * Starting "ha-278127" primary control-plane node in "ha-278127" cluster
	I1126 20:06:25.055057   59960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:06:25.058039   59960 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:06:25.061008   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:25.061089   59960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:06:25.061106   59960 cache.go:65] Caching tarball of preloaded images
	I1126 20:06:25.061005   59960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:06:25.061198   59960 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:06:25.061210   59960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:06:25.061353   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:25.080808   59960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:06:25.080831   59960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:06:25.080846   59960 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:06:25.080876   59960 start.go:360] acquireMachinesLock for ha-278127: {Name:mkb106a4eb425a1b9d0e59976741b3f940666d17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:06:25.080933   59960 start.go:364] duration metric: took 35.659µs to acquireMachinesLock for "ha-278127"
	I1126 20:06:25.080951   59960 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:06:25.080956   59960 fix.go:54] fixHost starting: 
	I1126 20:06:25.081217   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:06:25.097737   59960 fix.go:112] recreateIfNeeded on ha-278127: state=Stopped err=<nil>
	W1126 20:06:25.097772   59960 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:06:25.101061   59960 out.go:252] * Restarting existing docker container for "ha-278127" ...
	I1126 20:06:25.101155   59960 cli_runner.go:164] Run: docker start ha-278127
	I1126 20:06:25.385420   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:06:25.411970   59960 kic.go:430] container "ha-278127" state is running.
	I1126 20:06:25.412392   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:25.431941   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:25.432192   59960 machine.go:94] provisionDockerMachine start ...
	I1126 20:06:25.432251   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:25.452939   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:25.453252   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:25.453261   59960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:06:25.454097   59960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44664->127.0.0.1:32828: read: connection reset by peer
	I1126 20:06:28.605461   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127
	
	I1126 20:06:28.605490   59960 ubuntu.go:182] provisioning hostname "ha-278127"
	I1126 20:06:28.605558   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:28.623455   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:28.623769   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:28.623786   59960 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-278127 && echo "ha-278127" | sudo tee /etc/hostname
	I1126 20:06:28.778155   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127
	
	I1126 20:06:28.778256   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:28.794949   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:28.795250   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:28.795271   59960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-278127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-278127/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-278127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:06:28.942212   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:06:28.942238   59960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:06:28.942272   59960 ubuntu.go:190] setting up certificates
	I1126 20:06:28.942281   59960 provision.go:84] configureAuth start
	I1126 20:06:28.942355   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:28.960559   59960 provision.go:143] copyHostCerts
	I1126 20:06:28.960617   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:28.960653   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:06:28.960666   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:28.960744   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:06:28.960844   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:28.960866   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:06:28.960877   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:28.960906   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:06:28.960964   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:28.960985   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:06:28.960993   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:28.961023   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:06:28.961088   59960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.ha-278127 san=[127.0.0.1 192.168.49.2 ha-278127 localhost minikube]
	I1126 20:06:29.153972   59960 provision.go:177] copyRemoteCerts
	I1126 20:06:29.154049   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:06:29.154092   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.171236   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.273352   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1126 20:06:29.273420   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:06:29.290237   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1126 20:06:29.290299   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1126 20:06:29.307794   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1126 20:06:29.307855   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:06:29.325356   59960 provision.go:87] duration metric: took 383.045342ms to configureAuth
	I1126 20:06:29.325387   59960 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:06:29.325626   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:29.325742   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.342790   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:29.343103   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:29.343131   59960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:06:29.721722   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:06:29.721744   59960 machine.go:97] duration metric: took 4.28954331s to provisionDockerMachine
	I1126 20:06:29.721770   59960 start.go:293] postStartSetup for "ha-278127" (driver="docker")
	I1126 20:06:29.721791   59960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:06:29.721855   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:06:29.721907   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.742288   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.845365   59960 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:06:29.848307   59960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:06:29.848344   59960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:06:29.848355   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:06:29.848405   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:06:29.848509   59960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:06:29.848521   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /etc/ssl/certs/41292.pem
	I1126 20:06:29.848614   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:06:29.855777   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:29.872505   59960 start.go:296] duration metric: took 150.71913ms for postStartSetup
	I1126 20:06:29.872582   59960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:06:29.872629   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.889019   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.990934   59960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:06:29.995268   59960 fix.go:56] duration metric: took 4.914304894s for fixHost
	I1126 20:06:29.995338   59960 start.go:83] releasing machines lock for "ha-278127", held for 4.914396494s
	I1126 20:06:29.995443   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:30.012377   59960 ssh_runner.go:195] Run: cat /version.json
	I1126 20:06:30.012396   59960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:06:30.012433   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:30.012448   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:30.031079   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:30.032530   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:30.145909   59960 ssh_runner.go:195] Run: systemctl --version
	I1126 20:06:30.239511   59960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:06:30.276317   59960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:06:30.280821   59960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:06:30.280919   59960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:06:30.288826   59960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:06:30.288852   59960 start.go:496] detecting cgroup driver to use...
	I1126 20:06:30.288908   59960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:06:30.288973   59960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:06:30.304277   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:06:30.316900   59960 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:06:30.316968   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:06:30.332722   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:06:30.345857   59960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:06:30.458910   59960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:06:30.568914   59960 docker.go:234] disabling docker service ...
	I1126 20:06:30.568992   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:06:30.584111   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:06:30.596826   59960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:06:30.712581   59960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:06:30.831709   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:06:30.843921   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:06:30.857895   59960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:06:30.858007   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.867693   59960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:06:30.867809   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.876639   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.885174   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.893801   59960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:06:30.901606   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.910405   59960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.918408   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.927292   59960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:06:30.934726   59960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:06:30.941996   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:31.058637   59960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:06:31.242820   59960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:06:31.242889   59960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:06:31.246945   59960 start.go:564] Will wait 60s for crictl version
	I1126 20:06:31.247023   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:06:31.250523   59960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:06:31.274233   59960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:06:31.274317   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:06:31.302783   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:06:31.335292   59960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:06:31.338152   59960 cli_runner.go:164] Run: docker network inspect ha-278127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:06:31.354467   59960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 20:06:31.358251   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:06:31.368693   59960 kubeadm.go:884] updating cluster {Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:06:31.368839   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:31.368891   59960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:06:31.403727   59960 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:06:31.403752   59960 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:06:31.404010   59960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:06:31.431423   59960 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:06:31.431446   59960 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:06:31.431457   59960 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1126 20:06:31.431560   59960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-278127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:06:31.431642   59960 ssh_runner.go:195] Run: crio config
	I1126 20:06:31.500147   59960 cni.go:84] Creating CNI manager for ""
	I1126 20:06:31.500186   59960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1126 20:06:31.500211   59960 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:06:31.500236   59960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-278127 NodeName:ha-278127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:06:31.500354   59960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-278127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:06:31.500372   59960 kube-vip.go:115] generating kube-vip config ...
	I1126 20:06:31.500428   59960 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1126 20:06:31.512046   59960 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:06:31.512210   59960 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1126 20:06:31.512299   59960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:06:31.519877   59960 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:06:31.519973   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1126 20:06:31.527497   59960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1126 20:06:31.540828   59960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:06:31.553623   59960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1126 20:06:31.566105   59960 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1126 20:06:31.578838   59960 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1126 20:06:31.582461   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:06:31.592186   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:31.707439   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:06:31.722268   59960 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127 for IP: 192.168.49.2
	I1126 20:06:31.722291   59960 certs.go:195] generating shared ca certs ...
	I1126 20:06:31.722307   59960 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:31.722445   59960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:06:31.722497   59960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:06:31.722508   59960 certs.go:257] generating profile certs ...
	I1126 20:06:31.722593   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key
	I1126 20:06:31.722624   59960 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab
	I1126 20:06:31.722643   59960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1126 20:06:32.010576   59960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab ...
	I1126 20:06:32.010610   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab: {Name:mk952cf244227c47330a0f303648b46942398499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.010819   59960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab ...
	I1126 20:06:32.010835   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab: {Name:mk44577b028f8c1bee471863ff089cc458df619d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.010930   59960 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt
	I1126 20:06:32.011078   59960 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key
	I1126 20:06:32.011225   59960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key
	I1126 20:06:32.011244   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1126 20:06:32.011263   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1126 20:06:32.011280   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1126 20:06:32.011297   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1126 20:06:32.011315   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1126 20:06:32.011331   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1126 20:06:32.011348   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1126 20:06:32.011362   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1126 20:06:32.011414   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:06:32.011456   59960 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:06:32.011469   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:06:32.011501   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:06:32.011530   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:06:32.011558   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:06:32.011608   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:32.011640   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.011656   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.011666   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem -> /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.012331   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:06:32.032881   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:06:32.054562   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:06:32.072828   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:06:32.091195   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:06:32.109160   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:06:32.126721   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:06:32.143729   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:06:32.162210   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:06:32.179022   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:06:32.196402   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:06:32.213770   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:06:32.227414   59960 ssh_runner.go:195] Run: openssl version
	I1126 20:06:32.233654   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:06:32.243718   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.247376   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.247448   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.289532   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:06:32.297668   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:06:32.306080   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.309793   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.309880   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.353652   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:06:32.364544   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:06:32.373430   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.381651   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.381803   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.434961   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:06:32.448704   59960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:06:32.454552   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:06:32.518905   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:06:32.599420   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:06:32.673604   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:06:32.734602   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:06:32.794948   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:06:32.842245   59960 kubeadm.go:401] StartCluster: {Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:32.842417   59960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:06:32.842512   59960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:06:32.887488   59960 cri.go:89] found id: "f5647f1652cc11a195a49a98906391e791c3136916a5e3c249907585088fad42"
	I1126 20:06:32.887548   59960 cri.go:89] found id: "1ed2c42e7047cc402ab04fdadafa16acc5208b12eede0475826c97d34c9a071f"
	I1126 20:06:32.887577   59960 cri.go:89] found id: "040a8549001808f2d3fce3d4cf9f8dff272706173960c5e8004af8b1ea042e80"
	I1126 20:06:32.887595   59960 cri.go:89] found id: "106da3c0ad4fa03ae491f571375cda1a123fe52e6f7ef39170a84c273267c713"
	I1126 20:06:32.887614   59960 cri.go:89] found id: "cdc1651fea8f10bd665928dcc7bb174b74385eb06e911da9629df17c0d9d29e8"
	I1126 20:06:32.887650   59960 cri.go:89] found id: ""
	I1126 20:06:32.887728   59960 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:06:32.910884   59960 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:06:32Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:06:32.911021   59960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:06:32.933474   59960 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:06:32.933554   59960 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:06:32.933631   59960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:06:32.956246   59960 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:06:32.956760   59960 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-278127" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:32.956919   59960 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-2326/kubeconfig needs updating (will repair): [kubeconfig missing "ha-278127" cluster setting kubeconfig missing "ha-278127" context setting]
	I1126 20:06:32.957299   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.957946   59960 kapi.go:59] client config for ha-278127: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:06:32.958772   59960 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1126 20:06:32.958857   59960 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1126 20:06:32.958878   59960 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1126 20:06:32.958921   59960 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1126 20:06:32.958940   59960 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1126 20:06:32.958837   59960 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1126 20:06:32.959354   59960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:06:32.974056   59960 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1126 20:06:32.974125   59960 kubeadm.go:602] duration metric: took 40.551528ms to restartPrimaryControlPlane
	I1126 20:06:32.974150   59960 kubeadm.go:403] duration metric: took 131.91251ms to StartCluster
	I1126 20:06:32.974180   59960 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.974282   59960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:32.974978   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.975243   59960 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:06:32.975297   59960 start.go:242] waiting for startup goroutines ...
	I1126 20:06:32.975325   59960 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:06:32.975918   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:32.981231   59960 out.go:179] * Enabled addons: 
	I1126 20:06:32.984100   59960 addons.go:530] duration metric: took 8.777007ms for enable addons: enabled=[]
	I1126 20:06:32.984180   59960 start.go:247] waiting for cluster config update ...
	I1126 20:06:32.984203   59960 start.go:256] writing updated cluster config ...
	I1126 20:06:32.987492   59960 out.go:203] 
	I1126 20:06:32.990613   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:32.990800   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:32.994017   59960 out.go:179] * Starting "ha-278127-m02" control-plane node in "ha-278127" cluster
	I1126 20:06:32.996802   59960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:06:32.999792   59960 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:06:33.002700   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:33.002740   59960 cache.go:65] Caching tarball of preloaded images
	I1126 20:06:33.002860   59960 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:06:33.002893   59960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:06:33.003031   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:33.003254   59960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:06:33.039303   59960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:06:33.039323   59960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:06:33.039336   59960 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:06:33.039360   59960 start.go:360] acquireMachinesLock for ha-278127-m02: {Name:mkfa715e07e067116cf6c4854164186af5a39436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:06:33.039417   59960 start.go:364] duration metric: took 41.518µs to acquireMachinesLock for "ha-278127-m02"
	I1126 20:06:33.039439   59960 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:06:33.039445   59960 fix.go:54] fixHost starting: m02
	I1126 20:06:33.039721   59960 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:06:33.071417   59960 fix.go:112] recreateIfNeeded on ha-278127-m02: state=Stopped err=<nil>
	W1126 20:06:33.071449   59960 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:06:33.074580   59960 out.go:252] * Restarting existing docker container for "ha-278127-m02" ...
	I1126 20:06:33.074664   59960 cli_runner.go:164] Run: docker start ha-278127-m02
	I1126 20:06:33.452368   59960 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:06:33.483474   59960 kic.go:430] container "ha-278127-m02" state is running.
	I1126 20:06:33.483869   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:33.512602   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:33.512851   59960 machine.go:94] provisionDockerMachine start ...
	I1126 20:06:33.512917   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:33.539611   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:33.539907   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:33.539915   59960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:06:33.540557   59960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35216->127.0.0.1:32833: read: connection reset by peer
	I1126 20:06:36.755151   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127-m02
	
	I1126 20:06:36.755173   59960 ubuntu.go:182] provisioning hostname "ha-278127-m02"
	I1126 20:06:36.755238   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:36.783610   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:36.783923   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:36.783950   59960 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-278127-m02 && echo "ha-278127-m02" | sudo tee /etc/hostname
	I1126 20:06:37.026368   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127-m02
	
	I1126 20:06:37.026488   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:37.056257   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:37.056574   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:37.056592   59960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-278127-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-278127-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-278127-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:06:37.278605   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:06:37.278692   59960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:06:37.278724   59960 ubuntu.go:190] setting up certificates
	I1126 20:06:37.278764   59960 provision.go:84] configureAuth start
	I1126 20:06:37.278849   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:37.306165   59960 provision.go:143] copyHostCerts
	I1126 20:06:37.306207   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:37.306246   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:06:37.306253   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:37.306332   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:06:37.306421   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:37.306441   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:06:37.306445   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:37.306474   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:06:37.306512   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:37.306528   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:06:37.306532   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:37.306553   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:06:37.306602   59960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.ha-278127-m02 san=[127.0.0.1 192.168.49.3 ha-278127-m02 localhost minikube]
	I1126 20:06:37.781886   59960 provision.go:177] copyRemoteCerts
	I1126 20:06:37.782050   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:06:37.782113   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:37.799978   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:37.920744   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1126 20:06:37.920800   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:06:37.946353   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1126 20:06:37.946424   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 20:06:37.990628   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1126 20:06:37.990734   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:06:38.022932   59960 provision.go:87] duration metric: took 744.14174ms to configureAuth
	I1126 20:06:38.022999   59960 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:06:38.023281   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:38.023419   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:38.055902   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:38.056219   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:38.056232   59960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:06:39.163004   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:06:39.163066   59960 machine.go:97] duration metric: took 5.650194842s to provisionDockerMachine
	I1126 20:06:39.163087   59960 start.go:293] postStartSetup for "ha-278127-m02" (driver="docker")
	I1126 20:06:39.163098   59960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:06:39.163204   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:06:39.163258   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.194111   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.327619   59960 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:06:39.331483   59960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:06:39.331507   59960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:06:39.331518   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:06:39.331574   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:06:39.331649   59960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:06:39.331655   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /etc/ssl/certs/41292.pem
	I1126 20:06:39.331756   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:06:39.344886   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:39.377797   59960 start.go:296] duration metric: took 214.695598ms for postStartSetup
	I1126 20:06:39.377880   59960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:06:39.377991   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.402878   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.525023   59960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:06:39.531527   59960 fix.go:56] duration metric: took 6.492076268s for fixHost
	I1126 20:06:39.531551   59960 start.go:83] releasing machines lock for "ha-278127-m02", held for 6.492125467s
	I1126 20:06:39.531622   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:39.571062   59960 out.go:179] * Found network options:
	I1126 20:06:39.574101   59960 out.go:179]   - NO_PROXY=192.168.49.2
	W1126 20:06:39.577135   59960 proxy.go:120] fail to check proxy env: Error ip not in block
	W1126 20:06:39.577189   59960 proxy.go:120] fail to check proxy env: Error ip not in block
	I1126 20:06:39.577283   59960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:06:39.577298   59960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:06:39.577325   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.577353   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.610149   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.618182   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.847910   59960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:06:39.986067   59960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:06:39.986218   59960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:06:40.010567   59960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:06:40.010651   59960 start.go:496] detecting cgroup driver to use...
	I1126 20:06:40.010701   59960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:06:40.010777   59960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:06:40.066499   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:06:40.113187   59960 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:06:40.113357   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:06:40.138505   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:06:40.165558   59960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:06:40.434812   59960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:06:40.667360   59960 docker.go:234] disabling docker service ...
	I1126 20:06:40.667485   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:06:40.689020   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:06:40.712251   59960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:06:41.062262   59960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:06:41.446879   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:06:41.479018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:06:41.522736   59960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:06:41.522836   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.550554   59960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:06:41.550640   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.568877   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.605965   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.634535   59960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:06:41.647439   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.679616   59960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.700895   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.724575   59960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:06:41.743621   59960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:06:41.761053   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:42.179518   59960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:08:12.654700   59960 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.475140858s)
	I1126 20:08:12.654725   59960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:08:12.654777   59960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:08:12.658561   59960 start.go:564] Will wait 60s for crictl version
	I1126 20:08:12.658629   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:08:12.662122   59960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:08:12.694230   59960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:08:12.694320   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:08:12.723516   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:08:12.752895   59960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:08:12.755800   59960 out.go:179]   - env NO_PROXY=192.168.49.2
	I1126 20:08:12.758681   59960 cli_runner.go:164] Run: docker network inspect ha-278127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:08:12.774831   59960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 20:08:12.778729   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:08:12.788193   59960 mustload.go:66] Loading cluster: ha-278127
	I1126 20:08:12.788437   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:08:12.788732   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:08:12.805367   59960 host.go:66] Checking if "ha-278127" exists ...
	I1126 20:08:12.805673   59960 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127 for IP: 192.168.49.3
	I1126 20:08:12.805688   59960 certs.go:195] generating shared ca certs ...
	I1126 20:08:12.805703   59960 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:08:12.805829   59960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:08:12.805875   59960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:08:12.805885   59960 certs.go:257] generating profile certs ...
	I1126 20:08:12.806061   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key
	I1126 20:08:12.806134   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.28ad082f
	I1126 20:08:12.806177   59960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key
	I1126 20:08:12.806189   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1126 20:08:12.806203   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1126 20:08:12.806214   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1126 20:08:12.806227   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1126 20:08:12.806238   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1126 20:08:12.806249   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1126 20:08:12.806265   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1126 20:08:12.806276   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1126 20:08:12.806330   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:08:12.806364   59960 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:08:12.806376   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:08:12.806404   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:08:12.806431   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:08:12.806458   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:08:12.806505   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:08:12.806543   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:12.806557   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem -> /usr/share/ca-certificates/4129.pem
	I1126 20:08:12.806568   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /usr/share/ca-certificates/41292.pem
	I1126 20:08:12.806631   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:08:12.824408   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:08:12.926228   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1126 20:08:12.930801   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1126 20:08:12.939401   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1126 20:08:12.947934   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1126 20:08:12.960335   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1126 20:08:12.964526   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1126 20:08:12.973104   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1126 20:08:12.978204   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1126 20:08:12.987576   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1126 20:08:12.991901   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1126 20:08:13.001289   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1126 20:08:13.006200   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1126 20:08:13.014443   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:08:13.039341   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:08:13.063520   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:08:13.085219   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:08:13.103037   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:08:13.123095   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:08:13.140681   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:08:13.160781   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:08:13.180406   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:08:13.200475   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:08:13.221024   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:08:13.239900   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1126 20:08:13.254738   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1126 20:08:13.269631   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1126 20:08:13.285317   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1126 20:08:13.300359   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1126 20:08:13.320893   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1126 20:08:13.340300   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1126 20:08:13.361527   59960 ssh_runner.go:195] Run: openssl version
	I1126 20:08:13.368555   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:08:13.377244   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.381511   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.381624   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.427936   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:08:13.437023   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:08:13.445274   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.449571   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.449682   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.496315   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:08:13.504808   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:08:13.513181   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.517313   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.517396   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.579337   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:08:13.588179   59960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:08:13.593330   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:08:13.645107   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:08:13.691020   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:08:13.735436   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:08:13.780762   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:08:13.830095   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:08:13.873290   59960 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1126 20:08:13.873415   59960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-278127-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:08:13.873445   59960 kube-vip.go:115] generating kube-vip config ...
	I1126 20:08:13.873508   59960 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1126 20:08:13.885513   59960 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:08:13.885577   59960 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1126 20:08:13.885657   59960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:08:13.893550   59960 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:08:13.893628   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1126 20:08:13.901912   59960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1126 20:08:13.916015   59960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:08:13.934936   59960 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1126 20:08:13.979363   59960 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1126 20:08:13.991396   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:08:14.018397   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:08:14.385132   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:08:14.402828   59960 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:08:14.403147   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:08:14.408967   59960 out.go:179] * Verifying Kubernetes components...
	I1126 20:08:14.411916   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:08:14.659853   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:08:14.678979   59960 kapi.go:59] client config for ha-278127: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1126 20:08:14.679061   59960 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1126 20:08:14.679322   59960 node_ready.go:35] waiting up to 6m0s for node "ha-278127-m02" to be "Ready" ...
	I1126 20:08:15.269402   59960 node_ready.go:49] node "ha-278127-m02" is "Ready"
	I1126 20:08:15.269438   59960 node_ready.go:38] duration metric: took 590.083677ms for node "ha-278127-m02" to be "Ready" ...
	I1126 20:08:15.269450   59960 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:08:15.269508   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:15.770378   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:16.271005   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:16.769624   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:17.269646   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:17.770292   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:18.270233   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:18.770225   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:19.269626   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:19.770251   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:20.270592   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:20.769691   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:21.269742   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:21.769575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:22.269640   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:22.770094   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:23.269745   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:23.770093   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:24.269839   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:24.770626   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:25.270510   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:25.770352   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:26.270238   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:26.770199   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:27.270553   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:27.770570   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:28.269631   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:28.770575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:29.269663   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:29.770438   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:30.269733   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:30.769570   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:31.269688   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:31.770556   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:32.270505   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:32.770152   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:33.269716   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:33.769765   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:34.269659   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:34.769641   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:35.269866   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:35.770030   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:36.270158   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:36.770014   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:37.270234   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:37.769610   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:38.270567   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:38.770558   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:39.269653   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:39.769895   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:40.270407   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:40.769781   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:41.270338   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:41.770411   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:42.269686   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:42.770028   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:43.269580   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:43.769636   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:44.269684   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:44.769627   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:45.272055   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:45.770418   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:46.269657   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:46.770575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:47.270036   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:47.770377   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:48.270502   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:48.770450   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:49.269719   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:49.770449   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:50.269903   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:50.769675   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:51.270539   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:51.770618   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:52.270336   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:52.770354   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:53.270340   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:53.769901   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:54.270054   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:54.769747   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:55.270283   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:55.770525   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:56.269881   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:56.769908   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:57.269834   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:57.769631   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:58.270414   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:58.770529   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:59.269820   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:59.770577   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:00.269749   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:00.770275   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:01.270165   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:01.769910   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:02.269673   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:02.770492   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:03.270339   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:03.769642   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:04.269668   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:04.770177   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:05.270062   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:05.770571   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:06.270286   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:06.770466   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:07.269878   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:07.770593   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:08.270292   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:08.770068   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:09.269767   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:09.769619   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:10.270146   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:10.769659   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:11.270311   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:11.770596   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:12.269893   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:12.769649   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:13.270341   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:13.770530   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:14.269596   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:14.769532   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:14.769644   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:14.805181   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:14.805204   59960 cri.go:89] found id: ""
	I1126 20:09:14.805213   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:14.805269   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.809129   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:14.809206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:14.835451   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:14.835475   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:14.835480   59960 cri.go:89] found id: ""
	I1126 20:09:14.835487   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:14.835543   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.839249   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.842501   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:14.842574   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:14.867922   59960 cri.go:89] found id: ""
	I1126 20:09:14.867948   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.867957   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:14.867963   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:14.868022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:14.893599   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:14.893625   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:14.893630   59960 cri.go:89] found id: ""
	I1126 20:09:14.893638   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:14.893730   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.897540   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.901438   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:14.901540   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:14.929244   59960 cri.go:89] found id: ""
	I1126 20:09:14.929268   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.929277   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:14.929284   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:14.929340   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:14.956242   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:14.956264   59960 cri.go:89] found id: ""
	I1126 20:09:14.956272   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:14.956326   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.960197   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:14.960271   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:14.985332   59960 cri.go:89] found id: ""
	I1126 20:09:14.985407   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.985428   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:14.985455   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:14.985495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:15.015412   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:15.015491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:15.446082   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:15.438231    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.438877    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440458    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440891    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.442380    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:15.438231    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.438877    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440458    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440891    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.442380    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:15.446107   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:15.446122   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:15.474426   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:15.474452   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:15.514330   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:15.514364   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:15.582633   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:15.582662   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:15.636475   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:15.636508   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:15.718181   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:15.718215   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:15.814217   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:15.814253   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:15.826793   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:15.826823   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:15.854520   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:15.854550   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.382038   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:18.401602   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:18.401678   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:18.435808   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:18.435831   59960 cri.go:89] found id: ""
	I1126 20:09:18.435839   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:18.435907   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.439686   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:18.439801   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:18.476740   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:18.476764   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:18.476770   59960 cri.go:89] found id: ""
	I1126 20:09:18.476787   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:18.476889   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.480732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.484682   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:18.484783   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:18.511910   59960 cri.go:89] found id: ""
	I1126 20:09:18.511974   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.511989   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:18.511996   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:18.512055   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:18.547921   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:18.547988   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:18.548006   59960 cri.go:89] found id: ""
	I1126 20:09:18.548014   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:18.548071   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.552076   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.556982   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:18.557066   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:18.587286   59960 cri.go:89] found id: ""
	I1126 20:09:18.587313   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.587333   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:18.587340   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:18.587401   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:18.620541   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.620559   59960 cri.go:89] found id: ""
	I1126 20:09:18.620567   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:18.620626   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.624723   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:18.624796   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:18.653037   59960 cri.go:89] found id: ""
	I1126 20:09:18.653060   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.653068   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:18.653077   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:18.653090   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.684308   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:18.684335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:18.776764   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:18.776798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:18.865581   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:18.856655    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858014    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858939    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.859710    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.861248    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:18.856655    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858014    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858939    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.859710    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.861248    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:18.865603   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:18.865616   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:18.909234   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:18.909270   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:18.960436   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:18.960477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:18.990735   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:18.990766   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:19.069643   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:19.069722   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:19.104112   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:19.104137   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:19.118175   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:19.118204   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:19.148200   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:19.148229   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:21.687827   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:21.698536   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:21.698621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:21.730147   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:21.730171   59960 cri.go:89] found id: ""
	I1126 20:09:21.730180   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:21.730235   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.735922   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:21.736012   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:21.763452   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:21.763481   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:21.763486   59960 cri.go:89] found id: ""
	I1126 20:09:21.763494   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:21.763551   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.767451   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.771041   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:21.771140   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:21.803663   59960 cri.go:89] found id: ""
	I1126 20:09:21.803688   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.803697   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:21.803703   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:21.803767   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:21.832470   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:21.832496   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:21.832501   59960 cri.go:89] found id: ""
	I1126 20:09:21.832510   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:21.832567   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.836410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.840076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:21.840157   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:21.866968   59960 cri.go:89] found id: ""
	I1126 20:09:21.866994   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.867004   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:21.867011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:21.867093   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:21.892977   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:21.893000   59960 cri.go:89] found id: ""
	I1126 20:09:21.893008   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:21.893083   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.896906   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:21.897019   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:21.923720   59960 cri.go:89] found id: ""
	I1126 20:09:21.923744   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.923753   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:21.923762   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:21.923793   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:22.011751   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:22.003342    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.003880    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.005519    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.006189    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.007784    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:22.003342    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.003880    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.005519    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.006189    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.007784    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:22.011856   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:22.011890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:22.042091   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:22.042121   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:22.079857   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:22.079886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:22.179933   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:22.179973   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:22.207540   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:22.207568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:22.263434   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:22.263465   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:22.313145   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:22.313180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:22.365142   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:22.365177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:22.446886   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:22.446920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:22.483927   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:22.483961   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:24.996823   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:25.007913   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:25.007987   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:25.044777   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:25.044801   59960 cri.go:89] found id: ""
	I1126 20:09:25.044810   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:25.044870   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.048843   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:25.048923   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:25.083120   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:25.083187   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:25.083197   59960 cri.go:89] found id: ""
	I1126 20:09:25.083205   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:25.083271   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.086865   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.090526   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:25.090596   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:25.118710   59960 cri.go:89] found id: ""
	I1126 20:09:25.118735   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.118745   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:25.118752   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:25.118809   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:25.145818   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:25.145843   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:25.145850   59960 cri.go:89] found id: ""
	I1126 20:09:25.145857   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:25.145956   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.154268   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.159267   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:25.159348   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:25.185977   59960 cri.go:89] found id: ""
	I1126 20:09:25.186002   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.186011   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:25.186017   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:25.186072   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:25.213727   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:25.213751   59960 cri.go:89] found id: ""
	I1126 20:09:25.213760   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:25.213826   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.217850   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:25.217960   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:25.246743   59960 cri.go:89] found id: ""
	I1126 20:09:25.246769   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.246779   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:25.246788   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:25.246800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:25.321227   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:25.312798    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.313456    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315126    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315598    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.317138    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:25.312798    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.313456    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315126    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315598    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.317138    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:25.321251   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:25.321288   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:25.346983   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:25.347011   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:25.407991   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:25.408027   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:25.439857   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:25.439886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:25.467227   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:25.467252   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:25.549334   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:25.549371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:25.590791   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:25.590821   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:25.636096   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:25.636130   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:25.668287   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:25.668314   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:25.765804   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:25.765838   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:28.279160   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:28.290077   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:28.290149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:28.320697   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:28.320720   59960 cri.go:89] found id: ""
	I1126 20:09:28.320729   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:28.320786   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.324391   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:28.324466   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:28.351072   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:28.351094   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:28.351099   59960 cri.go:89] found id: ""
	I1126 20:09:28.351106   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:28.351161   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.355739   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.359260   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:28.359346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:28.386343   59960 cri.go:89] found id: ""
	I1126 20:09:28.386370   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.386383   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:28.386390   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:28.386457   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:28.413613   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:28.413635   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:28.413641   59960 cri.go:89] found id: ""
	I1126 20:09:28.413648   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:28.413701   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.417403   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.420731   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:28.420810   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:28.446127   59960 cri.go:89] found id: ""
	I1126 20:09:28.446202   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.446225   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:28.446245   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:28.446337   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:28.471432   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:28.471454   59960 cri.go:89] found id: ""
	I1126 20:09:28.471462   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:28.471545   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.475058   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:28.475141   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:28.502515   59960 cri.go:89] found id: ""
	I1126 20:09:28.502539   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.502549   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:28.502559   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:28.502570   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:28.514608   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:28.514637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:28.557861   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:28.557890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:28.627880   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:28.627917   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:28.659730   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:28.659757   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:28.725495   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:28.717349    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.718072    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.719611    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.720154    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.722097    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:28.717349    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.718072    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.719611    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.720154    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.722097    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:28.725519   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:28.725532   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:28.763157   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:28.763187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:28.828543   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:28.828573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:28.855674   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:28.855707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:28.888296   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:28.888323   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:28.966101   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:28.966135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:31.560965   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:31.571673   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:31.571744   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:31.601161   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:31.601182   59960 cri.go:89] found id: ""
	I1126 20:09:31.601190   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:31.601269   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.605397   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:31.605476   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:31.631813   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:31.631835   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:31.631841   59960 cri.go:89] found id: ""
	I1126 20:09:31.631848   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:31.631904   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.635710   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.639546   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:31.639621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:31.674540   59960 cri.go:89] found id: ""
	I1126 20:09:31.674569   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.674578   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:31.674585   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:31.674643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:31.705780   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:31.705799   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:31.705803   59960 cri.go:89] found id: ""
	I1126 20:09:31.705810   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:31.705865   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.709862   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.713500   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:31.713591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:31.739394   59960 cri.go:89] found id: ""
	I1126 20:09:31.739419   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.739429   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:31.739435   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:31.739492   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:31.765811   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:31.765834   59960 cri.go:89] found id: ""
	I1126 20:09:31.765842   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:31.765960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.769463   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:31.769554   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:31.802081   59960 cri.go:89] found id: ""
	I1126 20:09:31.802107   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.802116   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:31.802153   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:31.802172   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:31.849273   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:31.849308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:31.902662   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:31.902697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:31.990675   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:31.990710   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:32.022637   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:32.022667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:32.100797   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:32.092180    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.093036    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.094703    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.095415    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.097142    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:32.092180    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.093036    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.094703    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.095415    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.097142    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:32.100820   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:32.100833   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:32.146149   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:32.146184   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:32.172943   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:32.172970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:32.199037   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:32.199063   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:32.306507   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:32.306540   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:32.319193   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:32.319221   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:34.849302   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:34.860158   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:34.860250   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:34.887094   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:34.887113   59960 cri.go:89] found id: ""
	I1126 20:09:34.887121   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:34.887177   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.890890   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:34.890964   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:34.921149   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:34.921177   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:34.921182   59960 cri.go:89] found id: ""
	I1126 20:09:34.921189   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:34.921243   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.924938   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.928493   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:34.928569   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:34.954052   59960 cri.go:89] found id: ""
	I1126 20:09:34.954078   59960 logs.go:282] 0 containers: []
	W1126 20:09:34.954087   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:34.954093   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:34.954206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:34.985031   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:34.985054   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:34.985059   59960 cri.go:89] found id: ""
	I1126 20:09:34.985067   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:34.985121   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.989050   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.992852   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:34.992934   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:35.019287   59960 cri.go:89] found id: ""
	I1126 20:09:35.019314   59960 logs.go:282] 0 containers: []
	W1126 20:09:35.019323   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:35.019330   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:35.019393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:35.049190   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:35.049217   59960 cri.go:89] found id: ""
	I1126 20:09:35.049237   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:35.049313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:35.053627   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:35.053713   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:35.091326   59960 cri.go:89] found id: ""
	I1126 20:09:35.091394   59960 logs.go:282] 0 containers: []
	W1126 20:09:35.091420   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:35.091440   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:35.091476   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:35.188523   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:35.188560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:35.220725   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:35.220755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:35.250614   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:35.250643   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:35.289963   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:35.289995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:35.303153   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:35.303180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:35.375929   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:35.367382    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.368117    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.369869    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.370618    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.372228    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:09:35.375952   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:35.375968   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:35.403037   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:35.403066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:35.445367   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:35.445402   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:35.491101   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:35.491135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:35.561489   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:35.561524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:38.150634   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:38.161275   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:38.161346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:38.189434   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:38.189461   59960 cri.go:89] found id: ""
	I1126 20:09:38.189469   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:38.189530   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.195206   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:38.195288   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:38.223137   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:38.223160   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:38.223166   59960 cri.go:89] found id: ""
	I1126 20:09:38.223173   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:38.223227   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.226977   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.230547   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:38.230624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:38.255698   59960 cri.go:89] found id: ""
	I1126 20:09:38.255723   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.255732   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:38.255742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:38.255800   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:38.285059   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:38.285082   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:38.285087   59960 cri.go:89] found id: ""
	I1126 20:09:38.285097   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:38.285151   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.288799   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.292713   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:38.292786   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:38.318862   59960 cri.go:89] found id: ""
	I1126 20:09:38.318889   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.318898   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:38.318905   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:38.318963   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:38.346973   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:38.346996   59960 cri.go:89] found id: ""
	I1126 20:09:38.347005   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:38.347057   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.350729   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:38.350856   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:38.378801   59960 cri.go:89] found id: ""
	I1126 20:09:38.378827   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.378836   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:38.378845   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:38.378915   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:38.390980   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:38.391009   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:38.422522   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:38.422550   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:38.469058   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:38.469133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:38.523109   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:38.523182   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:38.559691   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:38.559716   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:38.646468   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:38.646504   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:38.751509   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:38.751551   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:38.836492   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:38.827693    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.828759    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.829560    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.830636    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.831318    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:38.827693    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.828759    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.829560    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.830636    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.831318    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:38.836516   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:38.836528   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:38.876587   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:38.876623   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:38.910948   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:38.910987   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:41.443533   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:41.454798   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:41.454873   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:41.485670   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:41.485699   59960 cri.go:89] found id: ""
	I1126 20:09:41.485707   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:41.485761   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.489619   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:41.489690   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:41.525686   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:41.525710   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:41.525714   59960 cri.go:89] found id: ""
	I1126 20:09:41.525722   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:41.525777   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.536491   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.541670   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:41.541797   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:41.570295   59960 cri.go:89] found id: ""
	I1126 20:09:41.570319   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.570327   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:41.570334   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:41.570393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:41.598145   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:41.598169   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:41.598175   59960 cri.go:89] found id: ""
	I1126 20:09:41.598182   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:41.598258   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.602230   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.606445   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:41.606530   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:41.636614   59960 cri.go:89] found id: ""
	I1126 20:09:41.636637   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.636646   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:41.636652   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:41.636707   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:41.663292   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:41.663315   59960 cri.go:89] found id: ""
	I1126 20:09:41.663327   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:41.663382   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.667194   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:41.667277   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:41.696056   59960 cri.go:89] found id: ""
	I1126 20:09:41.696081   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.696090   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:41.696099   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:41.696110   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:41.794427   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:41.794463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:41.822463   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:41.822493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:41.871566   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:41.871599   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:41.916725   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:41.916759   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:41.950381   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:41.950410   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:41.982658   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:41.982692   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:41.996639   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:41.996672   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:42.087350   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:42.079184    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.079744    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081320    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081972    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.083647    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:42.079184    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.079744    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081320    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081972    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.083647    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:42.087369   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:42.087384   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:42.175919   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:42.176012   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:42.281379   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:42.281406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:44.882212   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:44.893873   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:44.893969   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:44.923663   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:44.923683   59960 cri.go:89] found id: ""
	I1126 20:09:44.923691   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:44.923744   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.927892   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:44.927959   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:44.958403   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:44.958423   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:44.958427   59960 cri.go:89] found id: ""
	I1126 20:09:44.958434   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:44.958486   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.962367   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.966913   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:44.966985   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:45.000482   59960 cri.go:89] found id: ""
	I1126 20:09:45.000503   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.000511   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:45.000517   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:45.000572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:45.031381   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:45.031401   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:45.031406   59960 cri.go:89] found id: ""
	I1126 20:09:45.031414   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:45.031471   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.036637   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.042551   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:45.042723   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:45.086906   59960 cri.go:89] found id: ""
	I1126 20:09:45.086987   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.087026   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:45.087050   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:45.087153   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:45.137504   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:45.137578   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:45.137598   59960 cri.go:89] found id: ""
	I1126 20:09:45.137621   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:45.137715   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.143678   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.149235   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:45.149438   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:45.196979   59960 cri.go:89] found id: ""
	I1126 20:09:45.197063   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.197089   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:45.197146   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:45.197191   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:45.267194   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:45.267280   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:45.386434   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:45.386524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:45.468233   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:45.459943    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.460742    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462336    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462624    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.464644    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:45.459943    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.460742    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462336    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462624    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.464644    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:45.468305   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:45.468342   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:45.541622   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:45.541649   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:45.613664   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:45.613695   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:45.641765   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:45.641794   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:45.702809   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:45.702837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:45.807019   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:45.807056   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:45.820258   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:45.820289   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:45.867345   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:45.867376   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:45.921560   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:45.921596   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:48.454091   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:48.464670   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:48.464755   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:48.493056   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:48.493081   59960 cri.go:89] found id: ""
	I1126 20:09:48.493089   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:48.493144   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.496943   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:48.497007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:48.524995   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:48.525020   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:48.525025   59960 cri.go:89] found id: ""
	I1126 20:09:48.525032   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:48.525085   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.528726   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.532247   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:48.532317   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:48.557862   59960 cri.go:89] found id: ""
	I1126 20:09:48.557887   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.557896   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:48.557902   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:48.557988   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:48.587744   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:48.587765   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:48.587770   59960 cri.go:89] found id: ""
	I1126 20:09:48.587777   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:48.587832   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.591388   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.594875   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:48.594985   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:48.627277   59960 cri.go:89] found id: ""
	I1126 20:09:48.627298   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.627313   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:48.627352   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:48.627433   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:48.664063   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:48.664088   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:48.664102   59960 cri.go:89] found id: ""
	I1126 20:09:48.664110   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:48.664222   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.668219   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.671608   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:48.671680   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:48.700294   59960 cri.go:89] found id: ""
	I1126 20:09:48.700322   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.700331   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:48.700340   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:48.700351   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:48.793887   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:48.793974   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:48.807445   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:48.807472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:48.881133   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:48.873596    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.874156    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.875737    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.876232    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.877299    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:48.873596    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.874156    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.875737    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.876232    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.877299    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:48.881155   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:48.881167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:48.926338   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:48.926370   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:48.980929   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:48.980964   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:49.008703   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:49.008729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:49.035020   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:49.035134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:49.075209   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:49.075239   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:49.102778   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:49.102808   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:49.148209   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:49.148243   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:49.175449   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:49.175477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:51.750461   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:51.761173   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:51.761247   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:51.792174   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:51.792200   59960 cri.go:89] found id: ""
	I1126 20:09:51.792207   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:51.792272   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.796194   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:51.796266   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:51.826309   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:51.826333   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:51.826339   59960 cri.go:89] found id: ""
	I1126 20:09:51.826346   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:51.826408   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.830049   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.833626   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:51.833703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:51.864668   59960 cri.go:89] found id: ""
	I1126 20:09:51.864693   59960 logs.go:282] 0 containers: []
	W1126 20:09:51.864702   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:51.864709   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:51.864770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:51.902154   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:51.902178   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:51.902184   59960 cri.go:89] found id: ""
	I1126 20:09:51.902191   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:51.902244   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.906099   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.909550   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:51.909622   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:51.940956   59960 cri.go:89] found id: ""
	I1126 20:09:51.940984   59960 logs.go:282] 0 containers: []
	W1126 20:09:51.940993   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:51.941000   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:51.941057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:51.967086   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:51.967112   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:51.967117   59960 cri.go:89] found id: ""
	I1126 20:09:51.967125   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:51.967206   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.970992   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.974344   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:51.974463   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:52.006654   59960 cri.go:89] found id: ""
	I1126 20:09:52.006675   59960 logs.go:282] 0 containers: []
	W1126 20:09:52.006684   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:52.006693   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:52.006705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:52.033587   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:52.033621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:52.062777   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:52.062810   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:52.136250   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:52.127112    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.127989    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.129548    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.130437    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.132317    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:52.127112    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.127989    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.129548    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.130437    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.132317    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:52.136279   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:52.136292   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:52.165716   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:52.165792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:52.210120   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:52.210157   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:52.266182   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:52.266228   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:52.296704   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:52.296732   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:52.373394   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:52.373432   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:52.409405   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:52.409436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:52.508717   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:52.508755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:52.520510   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:52.520577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.069988   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:55.081385   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:55.081477   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:55.109272   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:55.109297   59960 cri.go:89] found id: ""
	I1126 20:09:55.109306   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:55.109393   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.113332   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:55.113409   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:55.144644   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.144728   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:55.144749   59960 cri.go:89] found id: ""
	I1126 20:09:55.144782   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:55.144860   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.148962   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.153598   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:55.153724   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:55.180168   59960 cri.go:89] found id: ""
	I1126 20:09:55.180235   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.180274   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:55.180302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:55.180378   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:55.207578   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:55.207606   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:55.207611   59960 cri.go:89] found id: ""
	I1126 20:09:55.207621   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:55.207698   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.211665   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.215295   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:55.215371   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:55.243201   59960 cri.go:89] found id: ""
	I1126 20:09:55.243228   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.243237   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:55.243243   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:55.243299   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:55.273345   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:55.273370   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:55.273375   59960 cri.go:89] found id: ""
	I1126 20:09:55.273382   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:55.273434   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.277156   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.280557   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:55.280629   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:55.306973   59960 cri.go:89] found id: ""
	I1126 20:09:55.307037   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.307052   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:55.307061   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:55.307072   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:55.405440   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:55.405474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:55.418598   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:55.418628   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:55.487261   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:55.479261    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.479915    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481393    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481846    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.483618    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:55.479261    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.479915    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481393    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481846    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.483618    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:55.487286   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:55.487299   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:55.531555   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:55.531626   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:55.601020   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:55.601057   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:55.632319   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:55.632347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:55.660851   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:55.660881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:55.742963   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:55.742998   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:55.773047   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:55.773076   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.826960   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:55.826991   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:55.855917   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:55.855944   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:58.399772   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:58.415975   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:58.416043   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:58.442760   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:58.442782   59960 cri.go:89] found id: ""
	I1126 20:09:58.442792   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:58.442850   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.446527   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:58.446620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:58.476049   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:58.476071   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:58.476076   59960 cri.go:89] found id: ""
	I1126 20:09:58.476084   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:58.476141   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.480019   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.483716   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:58.483799   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:58.514116   59960 cri.go:89] found id: ""
	I1126 20:09:58.514138   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.514147   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:58.514153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:58.514220   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:58.547211   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:58.547233   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:58.547239   59960 cri.go:89] found id: ""
	I1126 20:09:58.547257   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:58.547342   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.551299   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.554848   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:58.554921   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:58.583768   59960 cri.go:89] found id: ""
	I1126 20:09:58.583793   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.583802   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:58.583809   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:58.583865   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:58.611601   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:58.611635   59960 cri.go:89] found id: ""
	I1126 20:09:58.611644   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:09:58.611703   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.615732   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:58.615802   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:58.646048   59960 cri.go:89] found id: ""
	I1126 20:09:58.646087   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.646096   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:58.646106   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:58.646135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:58.745296   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:58.745332   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:58.820265   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:58.811642    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.812262    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.813785    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.814448    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.815924    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:58.811642    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.812262    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.813785    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.814448    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.815924    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:58.820294   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:58.820308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:58.877523   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:58.877556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:58.904630   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:58.904656   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:58.980105   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:58.980138   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:58.992220   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:58.992248   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:59.019086   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:59.019112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:59.058229   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:59.058260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:59.106394   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:59.106427   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:59.134445   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:59.134474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:01.667677   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:01.679153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:01.679227   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:01.713101   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:01.713122   59960 cri.go:89] found id: ""
	I1126 20:10:01.713130   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:01.713185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.717042   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:01.717117   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:01.748792   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:01.748817   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:01.748823   59960 cri.go:89] found id: ""
	I1126 20:10:01.748832   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:01.748889   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.752752   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.756411   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:01.756487   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:01.785898   59960 cri.go:89] found id: ""
	I1126 20:10:01.785954   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.785964   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:01.785971   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:01.786033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:01.817470   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:01.817496   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:01.817502   59960 cri.go:89] found id: ""
	I1126 20:10:01.817509   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:01.817567   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.821688   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.826052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:01.826203   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:01.856542   59960 cri.go:89] found id: ""
	I1126 20:10:01.856568   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.856590   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:01.856620   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:01.856742   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:01.893138   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:01.893218   59960 cri.go:89] found id: ""
	I1126 20:10:01.893242   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:01.893337   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.897863   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:01.898026   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:01.935921   59960 cri.go:89] found id: ""
	I1126 20:10:01.935951   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.935961   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:01.935971   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:01.935985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:01.973303   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:01.973332   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:02.028454   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:02.028493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:02.074241   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:02.074272   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:02.162898   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:02.162936   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:02.176057   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:02.176088   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:02.235629   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:02.235665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:02.306607   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:02.306643   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:02.337699   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:02.337729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:02.374553   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:02.374582   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:02.481202   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:02.481238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:02.563313   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:02.555444    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.556211    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.557668    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.558242    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.559786    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:02.555444    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.556211    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.557668    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.558242    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.559786    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:05.064305   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:05.075852   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:05.075925   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:05.108322   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:05.108345   59960 cri.go:89] found id: ""
	I1126 20:10:05.108354   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:05.108410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.112382   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:05.112460   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:05.140946   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:05.141021   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:05.141040   59960 cri.go:89] found id: ""
	I1126 20:10:05.141063   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:05.141150   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.145278   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.148898   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:05.148974   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:05.176423   59960 cri.go:89] found id: ""
	I1126 20:10:05.176450   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.176459   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:05.176466   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:05.176527   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:05.204990   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:05.205013   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:05.205018   59960 cri.go:89] found id: ""
	I1126 20:10:05.205026   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:05.205088   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.208959   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.212627   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:05.212730   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:05.239581   59960 cri.go:89] found id: ""
	I1126 20:10:05.239604   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.239614   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:05.239620   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:05.239679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:05.268087   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:05.268110   59960 cri.go:89] found id: ""
	I1126 20:10:05.268119   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:05.268176   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.271819   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:05.271923   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:05.298753   59960 cri.go:89] found id: ""
	I1126 20:10:05.298819   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.298833   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:05.298843   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:05.298855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:05.325518   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:05.325548   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:05.376406   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:05.376438   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:05.428781   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:05.428943   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:05.459754   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:05.459786   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:05.487550   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:05.487581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:05.520035   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:05.520071   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:05.616425   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:05.616503   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:05.630189   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:05.630221   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:05.715272   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:05.705315    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.706188    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708012    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708749    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.710497    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:05.705315    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.706188    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708012    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708749    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.710497    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:05.715301   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:05.715315   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:05.768473   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:05.768507   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:08.349688   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:08.360619   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:08.360693   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:08.388583   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:08.388610   59960 cri.go:89] found id: ""
	I1126 20:10:08.388619   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:08.388678   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.392264   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:08.392334   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:08.418523   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:08.418549   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:08.418554   59960 cri.go:89] found id: ""
	I1126 20:10:08.418562   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:08.418621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.422368   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.425851   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:08.425954   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:08.456520   59960 cri.go:89] found id: ""
	I1126 20:10:08.456546   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.456555   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:08.456562   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:08.456620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:08.487158   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:08.487182   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:08.487186   59960 cri.go:89] found id: ""
	I1126 20:10:08.487195   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:08.487268   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.491193   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.494690   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:08.494760   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:08.523674   59960 cri.go:89] found id: ""
	I1126 20:10:08.523699   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.523708   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:08.523715   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:08.523773   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:08.569422   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:08.569442   59960 cri.go:89] found id: ""
	I1126 20:10:08.569449   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:08.569505   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.572997   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:08.573065   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:08.599736   59960 cri.go:89] found id: ""
	I1126 20:10:08.599763   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.599772   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:08.599781   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:08.599799   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:08.674461   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:08.665974    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.666705    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.668447    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.669108    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.670766    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:08.665974    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.666705    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.668447    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.669108    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.670766    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:08.674482   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:08.674495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:08.726546   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:08.726591   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:08.783639   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:08.783690   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:08.860709   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:08.860759   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:08.873030   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:08.873058   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:08.899170   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:08.899199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:08.940773   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:08.940855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:08.969671   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:08.969762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:09.001544   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:09.001621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:09.035799   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:09.035837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:11.634159   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:11.645145   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:11.645262   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:11.684091   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:11.684113   59960 cri.go:89] found id: ""
	I1126 20:10:11.684121   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:11.684198   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.687930   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:11.688002   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:11.716342   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:11.716366   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:11.716372   59960 cri.go:89] found id: ""
	I1126 20:10:11.716380   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:11.716438   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.720592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.724106   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:11.724181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:11.750971   59960 cri.go:89] found id: ""
	I1126 20:10:11.750997   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.751007   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:11.751014   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:11.751140   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:11.778888   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:11.778912   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:11.778917   59960 cri.go:89] found id: ""
	I1126 20:10:11.778924   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:11.778979   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.782704   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.786153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:11.786245   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:11.812859   59960 cri.go:89] found id: ""
	I1126 20:10:11.812924   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.812953   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:11.812972   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:11.813047   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:11.844995   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:11.845065   59960 cri.go:89] found id: ""
	I1126 20:10:11.845089   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:11.845159   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.848928   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:11.849056   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:11.878557   59960 cri.go:89] found id: ""
	I1126 20:10:11.878634   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.878657   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:11.878674   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:11.878686   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:11.911996   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:11.912024   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:11.957531   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:11.957700   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:12.002561   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:12.002600   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:12.037611   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:12.037655   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:12.124659   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:12.124695   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:12.157527   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:12.157559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:12.255561   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:12.255597   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:12.270701   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:12.270727   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:12.344084   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:12.335378    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.336132    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.337729    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.338527    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.340203    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:12.335378    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.336132    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.337729    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.338527    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.340203    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:12.344111   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:12.344127   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:12.414064   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:12.414099   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:14.957062   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:14.971279   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:14.971358   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:15.002850   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:15.002871   59960 cri.go:89] found id: ""
	I1126 20:10:15.002879   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:15.002953   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.007210   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:15.007317   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:15.044904   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:15.044929   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:15.044934   59960 cri.go:89] found id: ""
	I1126 20:10:15.044943   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:15.045037   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.050180   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.055192   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:15.055293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:15.087772   59960 cri.go:89] found id: ""
	I1126 20:10:15.087798   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.087815   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:15.087822   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:15.087883   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:15.117095   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:15.117114   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:15.117119   59960 cri.go:89] found id: ""
	I1126 20:10:15.117127   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:15.117185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.120995   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.124760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:15.124885   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:15.157854   59960 cri.go:89] found id: ""
	I1126 20:10:15.157954   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.157994   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:15.158017   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:15.158084   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:15.190383   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:15.190407   59960 cri.go:89] found id: ""
	I1126 20:10:15.190417   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:15.190474   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.194524   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:15.194624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:15.223311   59960 cri.go:89] found id: ""
	I1126 20:10:15.223337   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.223346   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:15.223355   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:15.223366   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:15.236105   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:15.236134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:15.263408   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:15.263436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:15.308099   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:15.308133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:15.370222   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:15.370258   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:15.412978   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:15.413009   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:15.482330   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:15.473679    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.474420    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476124    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476749    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.478398    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:15.473679    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.474420    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476124    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476749    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.478398    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:15.482403   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:15.482428   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:15.528305   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:15.528335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:15.564111   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:15.564138   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:15.592541   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:15.592569   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:15.673319   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:15.673357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:18.279646   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:18.290358   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:18.290427   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:18.319136   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:18.319159   59960 cri.go:89] found id: ""
	I1126 20:10:18.319168   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:18.319225   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.322893   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:18.322967   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:18.350092   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:18.350120   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:18.350126   59960 cri.go:89] found id: ""
	I1126 20:10:18.350139   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:18.350193   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.354777   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.358503   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:18.358602   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:18.396162   59960 cri.go:89] found id: ""
	I1126 20:10:18.396185   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.396193   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:18.396199   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:18.396262   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:18.430093   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:18.430119   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:18.430124   59960 cri.go:89] found id: ""
	I1126 20:10:18.430131   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:18.430196   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.434456   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.438374   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:18.438451   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:18.478030   59960 cri.go:89] found id: ""
	I1126 20:10:18.478058   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.478070   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:18.478076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:18.478137   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:18.506317   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:18.506340   59960 cri.go:89] found id: ""
	I1126 20:10:18.506349   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:18.506410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.510476   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:18.510552   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:18.550337   59960 cri.go:89] found id: ""
	I1126 20:10:18.550408   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.550436   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:18.550454   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:18.550487   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:18.621602   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:18.613602    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.614230    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.615899    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.616339    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.617881    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:18.613602    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.614230    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.615899    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.616339    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.617881    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:18.621625   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:18.621638   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:18.648795   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:18.648824   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:18.691314   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:18.691358   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:18.771327   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:18.771367   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:18.808287   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:18.808319   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:18.907011   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:18.907048   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:18.919575   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:18.919605   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:18.961664   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:18.961697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:19.020056   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:19.020092   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:19.050179   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:19.050206   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:21.599106   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:21.611209   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:21.611309   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:21.639207   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:21.639229   59960 cri.go:89] found id: ""
	I1126 20:10:21.639238   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:21.639296   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.643290   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:21.643365   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:21.675608   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:21.675633   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:21.675639   59960 cri.go:89] found id: ""
	I1126 20:10:21.675648   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:21.675702   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.679772   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.683385   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:21.683511   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:21.719004   59960 cri.go:89] found id: ""
	I1126 20:10:21.719078   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.719102   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:21.719123   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:21.719196   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:21.745555   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:21.745634   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:21.745660   59960 cri.go:89] found id: ""
	I1126 20:10:21.745681   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:21.745750   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.750313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.753830   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:21.753907   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:21.781119   59960 cri.go:89] found id: ""
	I1126 20:10:21.781199   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.781222   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:21.781243   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:21.781347   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:21.809894   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:21.810006   59960 cri.go:89] found id: ""
	I1126 20:10:21.810022   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:21.810092   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.813756   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:21.813853   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:21.840725   59960 cri.go:89] found id: ""
	I1126 20:10:21.840751   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.840760   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:21.840769   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:21.840781   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:21.854145   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:21.854177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:21.884873   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:21.884902   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:21.936427   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:21.936463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:21.990170   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:21.990205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:22.077016   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:22.077064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:22.106941   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:22.106974   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:22.136672   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:22.136703   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:22.235594   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:22.235630   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:22.305008   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:22.295860    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.296666    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.298548    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.299084    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.300765    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:22.295860    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.296666    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.298548    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.299084    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.300765    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:22.305032   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:22.305046   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:22.378673   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:22.378711   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:24.920612   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:24.931941   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:24.932015   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:24.958956   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:24.958979   59960 cri.go:89] found id: ""
	I1126 20:10:24.958988   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:24.959047   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.962853   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:24.962931   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:24.989108   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:24.989130   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:24.989134   59960 cri.go:89] found id: ""
	I1126 20:10:24.989141   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:24.989195   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.992756   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.996360   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:24.996431   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:25.023636   59960 cri.go:89] found id: ""
	I1126 20:10:25.023660   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.023670   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:25.023676   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:25.023751   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:25.056300   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:25.056325   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:25.056331   59960 cri.go:89] found id: ""
	I1126 20:10:25.056339   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:25.056407   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.060822   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.066693   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:25.066825   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:25.098171   59960 cri.go:89] found id: ""
	I1126 20:10:25.098239   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.098258   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:25.098265   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:25.098344   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:25.129634   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:25.129655   59960 cri.go:89] found id: ""
	I1126 20:10:25.129664   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:25.129759   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.134599   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:25.134715   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:25.166870   59960 cri.go:89] found id: ""
	I1126 20:10:25.166896   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.166905   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:25.166918   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:25.166931   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:25.201303   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:25.201335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:25.234106   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:25.234132   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:25.335293   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:25.335329   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:25.367895   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:25.367920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:25.408499   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:25.408540   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:25.489459   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:25.489496   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:25.525614   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:25.525642   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:25.540937   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:25.541079   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:25.619457   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:25.611129    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.611986    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.613567    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.614319    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.615842    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:25.611129    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.611986    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.613567    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.614319    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.615842    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:25.619480   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:25.619494   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:25.667380   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:25.667419   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.233076   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:28.244698   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:28.244770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:28.272507   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:28.272530   59960 cri.go:89] found id: ""
	I1126 20:10:28.272538   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:28.272596   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.276257   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:28.276333   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:28.303315   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:28.303337   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:28.303342   59960 cri.go:89] found id: ""
	I1126 20:10:28.303349   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:28.303429   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.307300   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.310655   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:28.310727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:28.337118   59960 cri.go:89] found id: ""
	I1126 20:10:28.337140   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.337150   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:28.337156   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:28.337214   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:28.364328   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.364352   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:28.364358   59960 cri.go:89] found id: ""
	I1126 20:10:28.364374   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:28.364436   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.368741   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.372299   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:28.372385   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:28.398315   59960 cri.go:89] found id: ""
	I1126 20:10:28.398342   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.398351   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:28.398357   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:28.398418   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:28.426255   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:28.426276   59960 cri.go:89] found id: ""
	I1126 20:10:28.426287   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:28.426342   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.429863   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:28.430017   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:28.456908   59960 cri.go:89] found id: ""
	I1126 20:10:28.456933   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.456942   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:28.456951   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:28.456962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:28.532783   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:28.532820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:28.637119   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:28.637160   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:28.711269   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:28.702783    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.703978    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.704633    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706176    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706692    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:28.702783    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.703978    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.704633    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706176    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706692    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:28.711288   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:28.711304   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:28.737855   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:28.737883   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:28.789442   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:28.789477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:28.820705   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:28.820738   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:28.855530   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:28.855560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:28.868297   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:28.868324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:28.913639   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:28.913673   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.973350   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:28.973386   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:31.500924   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:31.511869   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:31.511943   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:31.546414   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:31.546447   59960 cri.go:89] found id: ""
	I1126 20:10:31.546456   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:31.546559   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.550296   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:31.550368   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:31.577840   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:31.577859   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:31.577864   59960 cri.go:89] found id: ""
	I1126 20:10:31.577870   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:31.577967   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.581789   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.585352   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:31.585421   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:31.616396   59960 cri.go:89] found id: ""
	I1126 20:10:31.616419   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.616428   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:31.616435   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:31.616491   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:31.641907   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:31.641971   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:31.641977   59960 cri.go:89] found id: ""
	I1126 20:10:31.641984   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:31.642048   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.645886   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.649651   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:31.649732   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:31.682488   59960 cri.go:89] found id: ""
	I1126 20:10:31.682512   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.682521   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:31.682527   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:31.682597   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:31.713608   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:31.713632   59960 cri.go:89] found id: ""
	I1126 20:10:31.713641   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:31.713693   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.717274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:31.717349   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:31.750907   59960 cri.go:89] found id: ""
	I1126 20:10:31.750934   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.750948   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:31.750957   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:31.750970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:31.822403   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:31.813458    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.814237    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.815876    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.816493    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.818239    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:31.813458    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.814237    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.815876    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.816493    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.818239    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:31.822425   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:31.822440   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:31.849676   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:31.849705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:31.891923   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:31.891959   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:31.944564   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:31.944608   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:32.015493   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:32.015577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:32.047447   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:32.047480   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:32.127183   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:32.127225   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:32.229734   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:32.229767   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:32.243678   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:32.243719   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:32.271264   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:32.271291   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:34.809253   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:34.819692   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:34.819817   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:34.846220   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:34.846240   59960 cri.go:89] found id: ""
	I1126 20:10:34.846248   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:34.846302   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.849960   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:34.850035   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:34.875486   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:34.875510   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:34.875515   59960 cri.go:89] found id: ""
	I1126 20:10:34.875522   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:34.875591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.879655   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.883266   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:34.883341   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:34.910257   59960 cri.go:89] found id: ""
	I1126 20:10:34.910286   59960 logs.go:282] 0 containers: []
	W1126 20:10:34.910295   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:34.910302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:34.910359   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:34.936501   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:34.936526   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:34.936531   59960 cri.go:89] found id: ""
	I1126 20:10:34.936539   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:34.936602   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.940297   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.943886   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:34.943960   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:34.970440   59960 cri.go:89] found id: ""
	I1126 20:10:34.970467   59960 logs.go:282] 0 containers: []
	W1126 20:10:34.970476   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:34.970482   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:34.970540   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:34.996813   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:34.996833   59960 cri.go:89] found id: ""
	I1126 20:10:34.996842   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:34.996901   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:35.000962   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:35.001030   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:35.029207   59960 cri.go:89] found id: ""
	I1126 20:10:35.029229   59960 logs.go:282] 0 containers: []
	W1126 20:10:35.029237   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:35.029247   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:35.029259   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:35.089280   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:35.089316   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:35.137518   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:35.137557   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:35.198701   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:35.198741   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:35.226526   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:35.226560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:35.308302   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:35.308341   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:35.411713   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:35.411751   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:35.425089   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:35.425118   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:35.496500   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:35.487044    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.487890    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.489861    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.490651    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.492443    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:35.487044    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.487890    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.489861    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.490651    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.492443    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:35.496523   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:35.496538   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:35.521713   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:35.521740   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:35.552491   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:35.552520   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:38.092147   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:38.105386   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:38.105494   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:38.134115   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:38.134183   59960 cri.go:89] found id: ""
	I1126 20:10:38.134204   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:38.134297   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.138342   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:38.138463   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:38.165373   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:38.165448   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:38.165468   59960 cri.go:89] found id: ""
	I1126 20:10:38.165492   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:38.165591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.169464   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.173100   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:38.173220   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:38.201795   59960 cri.go:89] found id: ""
	I1126 20:10:38.201818   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.201826   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:38.201836   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:38.201895   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:38.234752   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:38.234776   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:38.234782   59960 cri.go:89] found id: ""
	I1126 20:10:38.234789   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:38.234845   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.239023   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.242779   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:38.242854   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:38.271155   59960 cri.go:89] found id: ""
	I1126 20:10:38.271184   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.271193   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:38.271200   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:38.271261   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:38.298657   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:38.298682   59960 cri.go:89] found id: ""
	I1126 20:10:38.298691   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:38.298766   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.302858   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:38.302929   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:38.330494   59960 cri.go:89] found id: ""
	I1126 20:10:38.330520   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.330529   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:38.330538   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:38.330570   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:38.356340   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:38.356374   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:38.401509   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:38.401542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:38.463681   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:38.463719   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:38.496848   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:38.496881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:38.524848   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:38.524875   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:38.607033   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:38.607098   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:38.709803   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:38.709840   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:38.722963   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:38.722995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:38.796592   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:38.787909    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.788704    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.790425    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.791012    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.792912    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:38.787909    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.788704    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.790425    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.791012    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.792912    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:38.796617   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:38.796635   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:38.836671   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:38.836707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
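The "container status" command above uses a fallback so log gathering still works when `crictl` is not on `PATH` (or when the runtime is Docker). A minimal sketch of that resolve-or-fall-back pattern, not minikube's actual source:

```shell
#!/bin/sh
# Sketch of the fallback seen in the "container status" step above:
# prefer the resolved binary path; if `which` finds nothing, emit the
# bare name so the outer `|| sudo docker ps -a` branch can still fire.
resolve() { which "$1" 2>/dev/null || echo "$1"; }

resolve sh        # present on any POSIX system: prints its resolved path
resolve crictl    # absent on a typical dev machine: prints the literal "crictl"
```

Running the resolved name through `sudo "$cmd" ps -a` then fails fast when the binary truly does not exist, which is what triggers the `docker ps -a` fallback in the logged command.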
	I1126 20:10:41.373598   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:41.384711   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:41.384792   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:41.414012   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:41.414038   59960 cri.go:89] found id: ""
	I1126 20:10:41.414047   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:41.414103   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.417961   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:41.418036   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:41.450051   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:41.450076   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:41.450082   59960 cri.go:89] found id: ""
	I1126 20:10:41.450089   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:41.450147   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.455240   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.459174   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:41.459275   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:41.487216   59960 cri.go:89] found id: ""
	I1126 20:10:41.487241   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.487250   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:41.487257   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:41.487340   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:41.515666   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:41.515739   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:41.515751   59960 cri.go:89] found id: ""
	I1126 20:10:41.515759   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:41.515817   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.519735   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.523565   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:41.523639   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:41.554213   59960 cri.go:89] found id: ""
	I1126 20:10:41.554240   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.554250   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:41.554256   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:41.554321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:41.584766   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:41.584790   59960 cri.go:89] found id: ""
	I1126 20:10:41.584799   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:41.584861   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.589437   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:41.589510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:41.616610   59960 cri.go:89] found id: ""
	I1126 20:10:41.616638   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.616648   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:41.616657   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:41.616669   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:41.696316   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:41.696352   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:41.765798   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:41.758434    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.758824    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760333    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760643    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.762180    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:41.758434    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.758824    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760333    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760643    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.762180    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:41.765870   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:41.765900   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:41.791490   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:41.791517   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:41.827993   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:41.828022   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:41.854480   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:41.854511   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:41.885603   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:41.885632   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:41.984936   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:41.984970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:41.997672   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:41.997701   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:42.039613   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:42.039668   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:42.100317   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:42.100359   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:44.745690   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:44.756208   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:44.756277   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:44.793586   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:44.793606   59960 cri.go:89] found id: ""
	I1126 20:10:44.793614   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:44.793666   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.797466   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:44.797561   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:44.823288   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:44.823313   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:44.823319   59960 cri.go:89] found id: ""
	I1126 20:10:44.823326   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:44.823383   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.828270   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.832190   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:44.832260   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:44.858643   59960 cri.go:89] found id: ""
	I1126 20:10:44.858694   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.858704   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:44.858711   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:44.858772   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:44.887625   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:44.887711   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:44.887722   59960 cri.go:89] found id: ""
	I1126 20:10:44.887730   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:44.887791   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.891593   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.895076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:44.895151   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:44.924994   59960 cri.go:89] found id: ""
	I1126 20:10:44.925060   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.925085   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:44.925104   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:44.925196   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:44.951783   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:44.951807   59960 cri.go:89] found id: ""
	I1126 20:10:44.951816   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:44.951874   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.955505   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:44.955620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:44.982789   59960 cri.go:89] found id: ""
	I1126 20:10:44.982814   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.982822   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:44.982831   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:44.982843   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:45.010557   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:45.010586   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:45.141549   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:45.141632   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:45.253485   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:45.253554   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:45.353619   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:45.353660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:45.408761   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:45.408795   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:45.443664   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:45.443692   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:45.470742   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:45.470773   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:45.504515   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:45.504544   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:45.608220   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:45.608254   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:45.620732   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:45.620761   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:45.707896   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:45.695026    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.696388    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.697297    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.699791    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.700340    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:45.695026    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.696388    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.697297    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.699791    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.700340    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
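The gather cycle above repeats every few seconds because minikube polls `pgrep -xnf kube-apiserver.*minikube.*` between passes and re-collects diagnostics until an apiserver process appears (or the overall wait times out). A hedged sketch of that poll structure, with a placeholder process name and the sleep omitted so it runs instantly:

```shell
#!/bin/sh
# Sketch of the poll loop visible in the log: probe for a running
# apiserver, and while none is found, keep retrying up to a cap.
# "no-such-apiserver-proc" is a placeholder that matches no process,
# so this exits via the attempt cap rather than a successful probe.
attempts=0
until pgrep -x "no-such-apiserver-proc" >/dev/null 2>&1 \
      || [ "$attempts" -ge 3 ]; do
  attempts=$((attempts + 1))
  # sleep 3   # the real loop waits between probes; omitted here
done
echo "attempts=$attempts"
```

In the log the probe keeps failing (the apiserver container exists but the process is not healthy, hence the `connection refused` from kubectl on localhost:8443), so each iteration re-gathers the same container and journal logs.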
	I1126 20:10:48.209609   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:48.220742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:48.220811   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:48.247863   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:48.247886   59960 cri.go:89] found id: ""
	I1126 20:10:48.247894   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:48.247949   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.251929   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:48.251997   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:48.280449   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:48.280470   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:48.280475   59960 cri.go:89] found id: ""
	I1126 20:10:48.280483   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:48.280537   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.284732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.288315   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:48.288405   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:48.316409   59960 cri.go:89] found id: ""
	I1126 20:10:48.316432   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.316440   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:48.316446   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:48.316506   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:48.349208   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:48.349271   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:48.349289   59960 cri.go:89] found id: ""
	I1126 20:10:48.349316   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:48.349408   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.354353   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.357751   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:48.357848   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:48.385059   59960 cri.go:89] found id: ""
	I1126 20:10:48.385081   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.385090   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:48.385107   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:48.385185   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:48.411304   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:48.411326   59960 cri.go:89] found id: ""
	I1126 20:10:48.411334   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:48.411405   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.415053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:48.415156   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:48.441024   59960 cri.go:89] found id: ""
	I1126 20:10:48.441046   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.441055   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:48.441063   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:48.441075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:48.469644   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:48.469672   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:48.510776   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:48.510859   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:48.592885   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:48.592917   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:48.620191   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:48.620216   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:48.715671   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:48.715746   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:48.730976   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:48.731004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:48.784446   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:48.784483   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:48.816189   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:48.816220   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:48.894569   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:48.894607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:48.934181   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:48.934214   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:49.000322   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:48.992247    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.992990    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994167    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994648    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.996101    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:48.992247    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.992990    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994167    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994648    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.996101    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:51.500568   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:51.512500   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:51.512570   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:51.550166   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:51.550188   59960 cri.go:89] found id: ""
	I1126 20:10:51.550196   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:51.550253   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.554115   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:51.554221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:51.580857   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:51.580880   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:51.580885   59960 cri.go:89] found id: ""
	I1126 20:10:51.580893   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:51.580949   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.584903   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.588661   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:51.588730   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:51.620121   59960 cri.go:89] found id: ""
	I1126 20:10:51.620147   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.620156   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:51.620163   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:51.620225   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:51.648043   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:51.648066   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:51.648071   59960 cri.go:89] found id: ""
	I1126 20:10:51.648079   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:51.648144   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.652146   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.656590   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:51.656658   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:51.684798   59960 cri.go:89] found id: ""
	I1126 20:10:51.684825   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.684835   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:51.684842   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:51.684900   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:51.712247   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:51.712270   59960 cri.go:89] found id: ""
	I1126 20:10:51.712279   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:51.712334   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.716105   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:51.716235   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:51.755296   59960 cri.go:89] found id: ""
	I1126 20:10:51.755373   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.755389   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:51.755400   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:51.755412   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:51.782840   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:51.782871   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:51.826403   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:51.826436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:51.894112   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:51.894148   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:51.920185   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:51.920212   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:51.993815   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:51.993856   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:52.030774   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:52.030804   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:52.112821   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:52.103396    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.104540    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.105295    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.106939    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.107489    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:52.103396    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.104540    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.105295    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.106939    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.107489    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:52.112847   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:52.112861   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:52.161738   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:52.161771   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:52.193340   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:52.193368   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:52.291814   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:52.291862   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:54.810104   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:54.820898   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:54.820971   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:54.849431   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:54.849454   59960 cri.go:89] found id: ""
	I1126 20:10:54.849462   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:54.849524   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.853394   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:54.853465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:54.879833   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:54.879855   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:54.879860   59960 cri.go:89] found id: ""
	I1126 20:10:54.879867   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:54.879926   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.883636   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.887200   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:54.887280   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:54.913349   59960 cri.go:89] found id: ""
	I1126 20:10:54.913374   59960 logs.go:282] 0 containers: []
	W1126 20:10:54.913382   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:54.913389   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:54.913446   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:54.941189   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:54.941215   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:54.941221   59960 cri.go:89] found id: ""
	I1126 20:10:54.941229   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:54.941285   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.945133   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.948594   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:54.948673   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:54.977649   59960 cri.go:89] found id: ""
	I1126 20:10:54.977677   59960 logs.go:282] 0 containers: []
	W1126 20:10:54.977687   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:54.977693   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:54.977768   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:55.008912   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:55.008938   59960 cri.go:89] found id: ""
	I1126 20:10:55.008948   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:55.009005   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:55.012659   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:55.012727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:55.056313   59960 cri.go:89] found id: ""
	I1126 20:10:55.056393   59960 logs.go:282] 0 containers: []
	W1126 20:10:55.056419   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:55.056449   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:55.056478   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:55.170137   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:55.170180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:55.194458   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:55.194489   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:55.279906   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:55.272019    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.272480    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274150    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274543    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.276078    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:55.272019    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.272480    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274150    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274543    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.276078    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:55.279931   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:55.279945   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:55.321902   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:55.321949   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:55.351446   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:55.351474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:55.426688   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:55.426723   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:55.463472   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:55.463501   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:55.510565   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:55.510598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:55.580501   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:55.580534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:55.614574   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:55.614602   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:58.162969   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:58.173910   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:58.174019   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:58.202329   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:58.202352   59960 cri.go:89] found id: ""
	I1126 20:10:58.202360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:58.202415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.206274   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:58.206347   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:58.233721   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:58.233741   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:58.233745   59960 cri.go:89] found id: ""
	I1126 20:10:58.233753   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:58.233811   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.237802   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.242346   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:58.242419   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:58.271013   59960 cri.go:89] found id: ""
	I1126 20:10:58.271038   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.271047   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:58.271053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:58.271109   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:58.298515   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:58.298538   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:58.298553   59960 cri.go:89] found id: ""
	I1126 20:10:58.298560   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:58.298617   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.302497   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.306172   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:58.306241   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:58.331672   59960 cri.go:89] found id: ""
	I1126 20:10:58.331698   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.331707   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:58.331714   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:58.331819   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:58.359197   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:58.359219   59960 cri.go:89] found id: ""
	I1126 20:10:58.359228   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:58.359307   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.363274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:58.363346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:58.403777   59960 cri.go:89] found id: ""
	I1126 20:10:58.403804   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.403814   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:58.403829   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:58.403890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:58.504667   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:58.504702   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:58.517722   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:58.517750   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:58.589740   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:58.581328    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.582205    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.583896    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.584218    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.585780    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:58.581328    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.582205    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.583896    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.584218    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.585780    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:58.589761   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:58.589774   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:58.617621   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:58.617648   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:58.660238   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:58.660281   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:58.709585   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:58.709624   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:58.783550   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:58.783586   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:58.820181   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:58.820219   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:58.848533   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:58.848564   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:58.921350   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:58.921390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:01.453687   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:01.467262   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:01.467365   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:01.498662   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:01.498715   59960 cri.go:89] found id: ""
	I1126 20:11:01.498724   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:01.498785   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.504322   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:01.504445   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:01.545072   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:01.545098   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:01.545105   59960 cri.go:89] found id: ""
	I1126 20:11:01.545113   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:01.545185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.548993   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.552685   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:01.552797   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:01.582855   59960 cri.go:89] found id: ""
	I1126 20:11:01.582881   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.582891   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:01.582897   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:01.582954   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:01.613527   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:01.613548   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:01.613553   59960 cri.go:89] found id: ""
	I1126 20:11:01.613560   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:01.613629   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.618859   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.623550   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:01.623624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:01.660116   59960 cri.go:89] found id: ""
	I1126 20:11:01.660140   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.660149   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:01.660159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:01.660221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:01.692418   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:01.692442   59960 cri.go:89] found id: ""
	I1126 20:11:01.692450   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:01.692509   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.696379   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:01.696453   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:01.729407   59960 cri.go:89] found id: ""
	I1126 20:11:01.729430   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.729439   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:01.729447   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:01.729463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:01.784458   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:01.784492   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:01.872850   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:01.872886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:01.903039   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:01.903068   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:01.942057   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:01.942084   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:02.024475   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:02.024514   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:02.128096   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:02.128133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:02.199528   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:02.191565    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.192150    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.193873    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.194411    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.195999    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:02.191565    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.192150    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.193873    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.194411    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.195999    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:02.199554   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:02.199568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:02.226949   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:02.226985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:02.270517   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:02.270555   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:02.306879   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:02.306948   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:04.822921   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:04.834951   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:04.835018   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:04.862163   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:04.862219   59960 cri.go:89] found id: ""
	I1126 20:11:04.862244   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:04.862312   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.865957   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:04.866029   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:04.895638   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:04.895658   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:04.895663   59960 cri.go:89] found id: ""
	I1126 20:11:04.895669   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:04.895722   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.899645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.903838   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:04.903909   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:04.929326   59960 cri.go:89] found id: ""
	I1126 20:11:04.929389   59960 logs.go:282] 0 containers: []
	W1126 20:11:04.929422   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:04.929442   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:04.929522   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:04.956401   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:04.956472   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:04.956491   59960 cri.go:89] found id: ""
	I1126 20:11:04.956522   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:04.956593   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.960195   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.963812   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:04.963930   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:04.990366   59960 cri.go:89] found id: ""
	I1126 20:11:04.990387   59960 logs.go:282] 0 containers: []
	W1126 20:11:04.990395   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:04.990402   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:04.990468   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:05.019718   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:05.019752   59960 cri.go:89] found id: ""
	I1126 20:11:05.019762   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:05.019824   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:05.023681   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:05.023779   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:05.053886   59960 cri.go:89] found id: ""
	I1126 20:11:05.053915   59960 logs.go:282] 0 containers: []
	W1126 20:11:05.053953   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:05.053963   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:05.053994   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:05.152926   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:05.152963   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:05.165506   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:05.165534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:05.194915   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:05.194945   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:05.235104   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:05.235137   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:05.285215   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:05.285247   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:05.314134   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:05.314162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:05.341007   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:05.341034   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:05.418277   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:05.418313   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:05.491273   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:05.482790    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.483758    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.485510    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.486097    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.487714    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:05.482790    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.483758    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.485510    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.486097    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.487714    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:05.491294   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:05.491308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:05.552151   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:05.552187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:08.086064   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:08.097504   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:08.097574   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:08.126757   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:08.126780   59960 cri.go:89] found id: ""
	I1126 20:11:08.126789   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:08.126851   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.131043   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:08.131119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:08.158212   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:08.158274   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:08.158289   59960 cri.go:89] found id: ""
	I1126 20:11:08.158297   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:08.158360   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.162104   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.166980   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:08.167053   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:08.193258   59960 cri.go:89] found id: ""
	I1126 20:11:08.193290   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.193300   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:08.193307   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:08.193374   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:08.219187   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:08.219210   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:08.219216   59960 cri.go:89] found id: ""
	I1126 20:11:08.219234   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:08.219313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.223489   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.227150   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:08.227228   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:08.255318   59960 cri.go:89] found id: ""
	I1126 20:11:08.255340   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.255348   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:08.255355   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:08.255411   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:08.282171   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:08.282194   59960 cri.go:89] found id: ""
	I1126 20:11:08.282202   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:08.282273   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.285788   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:08.285852   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:08.315430   59960 cri.go:89] found id: ""
	I1126 20:11:08.315505   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.315538   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:08.315560   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:08.315580   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:08.345199   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:08.345268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:08.441184   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:08.441220   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:08.511176   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:08.500509    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.501151    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504004    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504546    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.506870    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:08.500509    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.501151    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504004    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504546    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.506870    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:08.511208   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:08.511222   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:08.543421   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:08.543450   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:08.604175   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:08.604207   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:08.632557   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:08.632623   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:08.663480   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:08.663506   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:08.675096   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:08.675127   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:08.713968   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:08.713998   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:08.759141   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:08.759176   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:11.351574   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:11.361875   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:11.361972   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:11.388446   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:11.388515   59960 cri.go:89] found id: ""
	I1126 20:11:11.388529   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:11.388594   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.392093   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:11.392176   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:11.421855   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:11.421875   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:11.421880   59960 cri.go:89] found id: ""
	I1126 20:11:11.421887   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:11.421974   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.425675   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.429670   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:11.429770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:11.455248   59960 cri.go:89] found id: ""
	I1126 20:11:11.455272   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.455280   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:11.455287   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:11.455349   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:11.481734   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:11.481755   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:11.481761   59960 cri.go:89] found id: ""
	I1126 20:11:11.481769   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:11.481841   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.485836   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.489303   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:11.489380   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:11.521985   59960 cri.go:89] found id: ""
	I1126 20:11:11.522011   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.522020   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:11.522036   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:11.522095   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:11.561668   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:11.561700   59960 cri.go:89] found id: ""
	I1126 20:11:11.561708   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:11.561772   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.565986   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:11.566063   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:11.594364   59960 cri.go:89] found id: ""
	I1126 20:11:11.594386   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.594395   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:11.594404   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:11.594440   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:11.639020   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:11.639057   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:11.709026   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:11.709063   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:11.739742   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:11.739771   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:11.806014   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:11.797164    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798194    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798970    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.800645    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.801154    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:11.797164    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798194    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798970    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.800645    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.801154    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:11.806036   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:11.806048   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:11.844958   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:11.844991   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:11.876607   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:11.876634   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:11.911651   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:11.911677   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:11.991136   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:11.991170   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:12.094606   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:12.094650   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:12.107579   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:12.107609   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:14.637133   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:14.648286   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:14.648355   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:14.678404   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:14.678427   59960 cri.go:89] found id: ""
	I1126 20:11:14.678435   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:14.678495   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.682257   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:14.682330   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:14.713744   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:14.713765   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:14.713770   59960 cri.go:89] found id: ""
	I1126 20:11:14.713777   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:14.713835   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.718000   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.721792   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:14.721916   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:14.753701   59960 cri.go:89] found id: ""
	I1126 20:11:14.753767   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.753793   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:14.753812   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:14.753951   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:14.782584   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:14.782609   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:14.782615   59960 cri.go:89] found id: ""
	I1126 20:11:14.782622   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:14.782679   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.786288   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.790091   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:14.790165   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:14.816545   59960 cri.go:89] found id: ""
	I1126 20:11:14.816570   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.816579   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:14.816586   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:14.816642   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:14.846080   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:14.846100   59960 cri.go:89] found id: ""
	I1126 20:11:14.846108   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:14.846166   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.849789   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:14.849880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:14.876460   59960 cri.go:89] found id: ""
	I1126 20:11:14.876491   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.876500   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:14.876508   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:14.876518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:14.951236   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:14.951274   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:14.983322   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:14.983350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:15.061107   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:15.051102    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.052170    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.053243    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.054378    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.056334    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:15.051102    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.052170    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.053243    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.054378    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.056334    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:15.061129   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:15.061144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:15.097557   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:15.097587   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:15.138293   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:15.138326   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:15.168503   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:15.168532   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:15.267115   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:15.267150   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:15.279584   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:15.279615   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:15.326150   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:15.326184   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:15.389193   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:15.389226   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:17.918406   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:17.929053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:17.929122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:17.953884   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:17.953945   59960 cri.go:89] found id: ""
	I1126 20:11:17.953954   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:17.954015   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.957395   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:17.957465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:17.983711   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:17.983731   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:17.983735   59960 cri.go:89] found id: ""
	I1126 20:11:17.983742   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:17.983795   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.987660   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.991154   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:17.991224   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:18.019969   59960 cri.go:89] found id: ""
	I1126 20:11:18.019998   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.020008   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:18.020015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:18.020073   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:18.061149   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:18.061172   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:18.061178   59960 cri.go:89] found id: ""
	I1126 20:11:18.061186   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:18.061246   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.065578   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.069815   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:18.069885   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:18.096457   59960 cri.go:89] found id: ""
	I1126 20:11:18.096479   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.096487   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:18.096494   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:18.096554   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:18.124303   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:18.124367   59960 cri.go:89] found id: ""
	I1126 20:11:18.124392   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:18.124471   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.130707   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:18.130839   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:18.156714   59960 cri.go:89] found id: ""
	I1126 20:11:18.156740   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.156750   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:18.156759   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:18.156773   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:18.233800   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:18.233837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:18.264943   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:18.264973   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:18.343435   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:18.335872    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.336444    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.337906    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.338530    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.339816    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:18.335872    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.336444    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.337906    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.338530    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.339816    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:18.343458   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:18.343470   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:18.372998   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:18.373026   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:18.416461   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:18.416495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:18.445233   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:18.445263   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:18.545748   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:18.545787   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:18.557806   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:18.557835   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:18.622509   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:18.622542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:18.707610   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:18.707689   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:21.236452   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:21.247662   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:21.247729   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:21.276004   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:21.276030   59960 cri.go:89] found id: ""
	I1126 20:11:21.276038   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:21.276125   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.279851   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:21.279945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:21.309267   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:21.309291   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:21.309297   59960 cri.go:89] found id: ""
	I1126 20:11:21.309304   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:21.309359   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.313384   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.317026   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:21.317099   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:21.347773   59960 cri.go:89] found id: ""
	I1126 20:11:21.347799   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.347807   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:21.347817   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:21.347901   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:21.389878   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:21.389898   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:21.389902   59960 cri.go:89] found id: ""
	I1126 20:11:21.389910   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:21.390028   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.396218   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.405704   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:21.405823   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:21.458505   59960 cri.go:89] found id: ""
	I1126 20:11:21.458573   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.458605   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:21.458635   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:21.458731   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:21.486896   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:21.486961   59960 cri.go:89] found id: ""
	I1126 20:11:21.486983   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:21.487052   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.490729   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:21.490845   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:21.521776   59960 cri.go:89] found id: ""
	I1126 20:11:21.521798   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.521806   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:21.521815   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:21.521827   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:21.540126   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:21.540201   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:21.612034   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:21.604355    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.605075    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.606757    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.607410    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.608381    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:21.604355    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.605075    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.606757    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.607410    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.608381    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:21.612058   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:21.612072   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:21.658622   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:21.658657   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:21.707807   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:21.707844   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:21.769271   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:21.769306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:21.801295   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:21.801325   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:21.896605   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:21.896639   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:21.929176   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:21.929205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:21.967857   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:21.967884   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:22.001350   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:22.001375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:24.595423   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:24.606910   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:24.606980   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:24.638795   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:24.638819   59960 cri.go:89] found id: ""
	I1126 20:11:24.638827   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:24.638885   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.642601   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:24.642677   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:24.709965   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:24.709984   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:24.709989   59960 cri.go:89] found id: ""
	I1126 20:11:24.709996   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:24.710075   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.714848   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.719509   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:24.719668   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:24.756426   59960 cri.go:89] found id: ""
	I1126 20:11:24.756497   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.756521   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:24.756540   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:24.756658   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:24.803189   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:24.803256   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:24.803274   59960 cri.go:89] found id: ""
	I1126 20:11:24.803295   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:24.803379   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.808196   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.812071   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:24.812194   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:24.852305   59960 cri.go:89] found id: ""
	I1126 20:11:24.852378   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.852408   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:24.852429   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:24.852520   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:24.889194   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:24.889263   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:24.889294   59960 cri.go:89] found id: ""
	I1126 20:11:24.889320   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:24.889413   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.893347   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.897224   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:24.897334   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:24.930230   59960 cri.go:89] found id: ""
	I1126 20:11:24.930304   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.930333   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:24.930344   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:24.930371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:25.035563   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:25.035604   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:25.054082   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:25.054112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:25.096053   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:25.096081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:25.145970   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:25.146007   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:25.185648   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:25.185678   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:25.214168   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:25.214199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:25.247077   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:25.247106   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:25.338812   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:25.330325    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.331301    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.332972    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.333487    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.335076    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:25.330325    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.331301    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.332972    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.333487    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.335076    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:25.338839   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:25.338854   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:25.379564   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:25.379600   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:25.447694   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:25.447730   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:25.472568   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:25.472598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:28.058550   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:28.076007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:28.076082   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:28.106329   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:28.106351   59960 cri.go:89] found id: ""
	I1126 20:11:28.106360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:28.106418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.110514   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:28.110591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:28.140757   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:28.140777   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:28.140782   59960 cri.go:89] found id: ""
	I1126 20:11:28.140789   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:28.140842   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.144844   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.148401   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:28.148473   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:28.174921   59960 cri.go:89] found id: ""
	I1126 20:11:28.174944   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.174953   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:28.174959   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:28.175022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:28.202405   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:28.202425   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:28.202429   59960 cri.go:89] found id: ""
	I1126 20:11:28.202436   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:28.202491   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.207455   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.211480   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:28.211548   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:28.239676   59960 cri.go:89] found id: ""
	I1126 20:11:28.239749   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.239773   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:28.239793   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:28.239857   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:28.269256   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:28.269277   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:28.269282   59960 cri.go:89] found id: ""
	I1126 20:11:28.269289   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:28.269344   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.273004   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.276329   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:28.276398   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:28.302206   59960 cri.go:89] found id: ""
	I1126 20:11:28.302272   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.302298   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:28.302321   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:28.302363   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:28.332034   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:28.332062   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:28.376567   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:28.376603   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:28.441530   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:28.441568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:28.468188   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:28.468219   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:28.544745   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:28.544780   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:28.590841   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:28.590870   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:28.603163   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:28.603194   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:28.675368   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:28.666467    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.667143    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.668892    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.669848    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.671529    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:28.666467    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.667143    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.668892    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.669848    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.671529    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:28.675390   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:28.675403   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:28.716129   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:28.716160   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:28.746889   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:28.746916   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:28.784649   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:28.784678   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:31.386032   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:31.396663   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:31.396729   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:31.424252   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:31.424274   59960 cri.go:89] found id: ""
	I1126 20:11:31.424282   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:31.424337   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.427909   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:31.427983   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:31.459053   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:31.459075   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:31.459080   59960 cri.go:89] found id: ""
	I1126 20:11:31.459088   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:31.459148   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.462802   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.466564   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:31.466687   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:31.497981   59960 cri.go:89] found id: ""
	I1126 20:11:31.498003   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.498012   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:31.498018   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:31.498110   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:31.526027   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:31.526052   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:31.526057   59960 cri.go:89] found id: ""
	I1126 20:11:31.526065   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:31.526170   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.529987   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.534855   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:31.534945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:31.563109   59960 cri.go:89] found id: ""
	I1126 20:11:31.563169   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.563198   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:31.563219   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:31.563293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:31.589243   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:31.589265   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:31.589270   59960 cri.go:89] found id: ""
	I1126 20:11:31.589278   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:31.589354   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.593459   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.596946   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:31.597021   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:31.623525   59960 cri.go:89] found id: ""
	I1126 20:11:31.623558   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.623567   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:31.623576   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:31.623587   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:31.652294   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:31.652373   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:31.735258   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:31.735294   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:31.768608   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:31.768683   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:31.870428   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:31.870508   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:31.897014   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:31.897042   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:32.001263   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:32.001299   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:32.038474   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:32.038514   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:32.052890   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:32.052925   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:32.157895   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:32.150135    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.150798    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152292    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152811    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.154388    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:32.150135    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.150798    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152292    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152811    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.154388    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:32.157991   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:32.158015   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:32.202276   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:32.202312   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:32.246886   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:32.246920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
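The "container status" step in the cycle above runs `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` — a resolve-then-fall-back shell idiom. A minimal sketch of that idiom, with `false` standing in for a missing runtime CLI so it runs anywhere (the stand-ins are illustrative, not minikube's code):

```shell
# Resolve-with-fallback, as in the log's container-status command:
#   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
# Step 1: resolve the preferred CLI to a path if installed, else keep the bare name.
cli="$(command -v crictl || echo crictl)"
echo "resolved CLI: $cli"
# Step 2: '||' falls through to the next runtime CLI when the first invocation fails.
# 'false' is a hypothetical stand-in for a crictl call that exits non-zero.
status="$( { false; } || echo fallback-used )"
echo "$status"
```

The outer `|| sudo docker ps -a` in the real command means a node with only Docker installed still yields a container listing.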
	I1126 20:11:34.774920   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:34.785509   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:34.785619   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:34.817587   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:34.817656   59960 cri.go:89] found id: ""
	I1126 20:11:34.817682   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:34.817753   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.821524   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:34.821594   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:34.849130   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:34.849154   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:34.849159   59960 cri.go:89] found id: ""
	I1126 20:11:34.849167   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:34.849233   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.852945   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.856601   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:34.856684   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:34.883375   59960 cri.go:89] found id: ""
	I1126 20:11:34.883398   59960 logs.go:282] 0 containers: []
	W1126 20:11:34.883412   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:34.883450   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:34.883524   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:34.909798   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:34.909821   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:34.909826   59960 cri.go:89] found id: ""
	I1126 20:11:34.909834   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:34.909888   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.913552   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.916964   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:34.917033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:34.949567   59960 cri.go:89] found id: ""
	I1126 20:11:34.949592   59960 logs.go:282] 0 containers: []
	W1126 20:11:34.949601   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:34.949608   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:34.949663   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:34.977128   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:34.977150   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:34.977156   59960 cri.go:89] found id: ""
	I1126 20:11:34.977163   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:34.977220   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.981001   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.984842   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:34.984957   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:35.012427   59960 cri.go:89] found id: ""
	I1126 20:11:35.012460   59960 logs.go:282] 0 containers: []
	W1126 20:11:35.012470   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:35.012479   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:35.012493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:35.040355   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:35.040396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:35.085028   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:35.085064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:35.113614   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:35.113649   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:35.153880   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:35.153911   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:35.198643   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:35.198675   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:35.268315   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:35.268350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:35.295776   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:35.295804   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:35.376804   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:35.376847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:35.482429   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:35.482467   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:35.495585   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:35.495620   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:35.570301   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:35.562818    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.563633    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565195    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565472    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.566934    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:35.562818    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.563633    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565195    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565472    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.566934    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:35.570323   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:35.570336   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
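Each unit-log step above caps collection at the most recent 400 lines (`journalctl -u kubelet -n 400`, `crictl logs --tail 400 …`). The same bounded-tail idea, sketched against a plain file instead of a systemd unit so it is runnable without journald (the temp file and line counts are illustrative only):

```shell
# Bounded log gathering: keep only the last N lines per source,
# mirroring `journalctl -u <unit> -n 400` in the log above.
tmp="$(mktemp)"
for i in $(seq 1 500); do echo "line $i"; done > "$tmp"   # 500-line fake log
kept="$(tail -n 400 "$tmp")"                               # bounded tail
count="$(printf '%s\n' "$kept" | wc -l | tr -d ' ')"
first="$(printf '%s\n' "$kept" | head -n 1)"
echo "kept $count lines, starting at: $first"
rm -f "$tmp"
```

Bounding every source at a fixed tail keeps a diagnostics pass O(N) per unit regardless of how long the node has been running.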
	I1126 20:11:38.104089   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:38.117181   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:38.117256   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:38.149986   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:38.150007   59960 cri.go:89] found id: ""
	I1126 20:11:38.150015   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:38.150071   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.153769   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:38.153836   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:38.181424   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:38.181445   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:38.181450   59960 cri.go:89] found id: ""
	I1126 20:11:38.181457   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:38.181514   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.186065   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.189965   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:38.190088   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:38.222377   59960 cri.go:89] found id: ""
	I1126 20:11:38.222403   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.222412   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:38.222418   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:38.222512   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:38.251289   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:38.251308   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:38.251312   59960 cri.go:89] found id: ""
	I1126 20:11:38.251319   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:38.251376   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.256455   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.260117   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:38.260191   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:38.285970   59960 cri.go:89] found id: ""
	I1126 20:11:38.285993   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.286001   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:38.286007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:38.286071   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:38.316333   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.316352   59960 cri.go:89] found id: ""
	I1126 20:11:38.316360   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:38.316418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.320056   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:38.320141   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:38.346321   59960 cri.go:89] found id: ""
	I1126 20:11:38.346343   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.346355   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:38.346365   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:38.346377   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:38.373397   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:38.373424   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:38.425362   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:38.425395   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.453015   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:38.453091   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:38.532623   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:38.532697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:38.633361   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:38.633397   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:38.645846   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:38.645873   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:38.703411   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:38.703444   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:38.767512   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:38.767547   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:38.796976   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:38.797004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:38.829009   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:38.829038   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:38.898466   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:38.890004    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.890695    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892444    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892921    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.894201    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:38.890004    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.890695    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892444    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892921    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.894201    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
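The timestamps above (20:11:31, :34, :38, :41) show the same gathering cycle repeating every few seconds while `sudo pgrep -xnf kube-apiserver.*minikube.*` waits for the apiserver to come back. A sketch of that poll-until-ready shape, with a hypothetical `check_ready` standing in for the real pgrep health check so the loop runs anywhere:

```shell
# Poll-until-ready loop, mirroring the repeated ~3s apiserver checks above.
# check_ready is a stand-in for `pgrep -xnf kube-apiserver.*minikube.*`;
# here it reports ready on the third attempt.
attempt=0
check_ready() { [ "$attempt" -ge 3 ]; }
until check_ready; do
  attempt=$((attempt + 1))
  # a real loop would `sleep 3` between checks (and enforce a deadline);
  # omitted here to keep the sketch fast
done
echo "ready after $attempt checks"
```

On each failed check the real code re-lists containers and re-gathers logs, which is why the whole block above repeats verbatim per attempt.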
	I1126 20:11:41.398722   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:41.410132   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:41.410201   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:41.438116   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:41.438139   59960 cri.go:89] found id: ""
	I1126 20:11:41.438148   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:41.438205   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.442017   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:41.442090   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:41.469903   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:41.469958   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:41.469963   59960 cri.go:89] found id: ""
	I1126 20:11:41.469970   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:41.470027   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.474067   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.478045   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:41.478121   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:41.505356   59960 cri.go:89] found id: ""
	I1126 20:11:41.505421   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.505446   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:41.505473   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:41.505547   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:41.539013   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:41.539078   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:41.539097   59960 cri.go:89] found id: ""
	I1126 20:11:41.539120   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:41.539192   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.545082   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.548706   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:41.548780   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:41.575834   59960 cri.go:89] found id: ""
	I1126 20:11:41.575859   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.575867   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:41.575874   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:41.575934   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:41.611347   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:41.611373   59960 cri.go:89] found id: ""
	I1126 20:11:41.611381   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:41.611452   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.615789   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:41.615865   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:41.641022   59960 cri.go:89] found id: ""
	I1126 20:11:41.641047   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.641057   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:41.641066   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:41.641078   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:41.742347   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:41.742381   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:41.754134   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:41.754164   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:41.831601   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:41.821574    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.822287    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.823756    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.824699    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.826433    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:11:41.831624   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:41.831637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:41.860096   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:41.860125   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:41.910250   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:41.910285   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:41.980123   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:41.980161   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:42.010802   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:42.010829   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:42.106028   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:42.106070   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:42.164514   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:42.164559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:42.271103   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:42.271151   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:44.839838   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:44.850546   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:44.850618   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:44.876918   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:44.876988   59960 cri.go:89] found id: ""
	I1126 20:11:44.877011   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:44.877094   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.881043   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:44.881125   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:44.911219   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:44.911239   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:44.911243   59960 cri.go:89] found id: ""
	I1126 20:11:44.911250   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:44.911304   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.914984   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.918517   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:44.918591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:44.948367   59960 cri.go:89] found id: ""
	I1126 20:11:44.948393   59960 logs.go:282] 0 containers: []
	W1126 20:11:44.948403   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:44.948410   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:44.948488   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:44.979725   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:44.979749   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:44.979762   59960 cri.go:89] found id: ""
	I1126 20:11:44.979770   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:44.979825   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.983672   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.987318   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:44.987393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:45.013302   59960 cri.go:89] found id: ""
	I1126 20:11:45.013326   59960 logs.go:282] 0 containers: []
	W1126 20:11:45.013335   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:45.013342   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:45.013400   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:45.055627   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:45.055649   59960 cri.go:89] found id: ""
	I1126 20:11:45.055657   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:45.055726   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:45.085558   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:45.085645   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:45.151023   59960 cri.go:89] found id: ""
	I1126 20:11:45.151097   59960 logs.go:282] 0 containers: []
	W1126 20:11:45.151125   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:45.151149   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:45.151189   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:45.299197   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:45.299495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:45.414522   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:45.414561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:45.426305   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:45.426334   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:45.498361   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:45.490138    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.490855    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.492369    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.493032    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.494581    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:11:45.498385   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:45.498406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:45.544282   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:45.544315   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:45.572601   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:45.572628   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:45.618675   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:45.618704   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:45.644699   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:45.644729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:45.692766   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:45.692847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:45.768264   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:45.768298   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.298071   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:48.309786   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:48.309955   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:48.338906   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:48.338929   59960 cri.go:89] found id: ""
	I1126 20:11:48.338938   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:48.339013   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.342703   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:48.342807   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:48.373459   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:48.373483   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:48.373489   59960 cri.go:89] found id: ""
	I1126 20:11:48.373497   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:48.373571   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.377243   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.380907   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:48.380978   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:48.410171   59960 cri.go:89] found id: ""
	I1126 20:11:48.410194   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.410203   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:48.410210   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:48.410269   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:48.438118   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:48.438141   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.438146   59960 cri.go:89] found id: ""
	I1126 20:11:48.438153   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:48.438208   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.441706   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.445239   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:48.445331   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:48.471795   59960 cri.go:89] found id: ""
	I1126 20:11:48.471818   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.471827   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:48.471834   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:48.471894   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:48.499373   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:48.499444   59960 cri.go:89] found id: ""
	I1126 20:11:48.499459   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:48.499520   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.503413   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:48.503486   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:48.530399   59960 cri.go:89] found id: ""
	I1126 20:11:48.530421   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.530435   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:48.530450   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:48.530464   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:48.571849   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:48.571882   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:48.658179   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:48.658279   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:48.689018   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:48.689045   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:48.763174   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:48.763207   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:48.778567   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:48.778596   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:48.827328   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:48.827365   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.857288   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:48.857365   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:48.888507   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:48.888539   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:48.988930   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:48.988967   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:49.069225   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:49.055449    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.056233    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.057886    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.058530    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.060083    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:11:49.069248   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:49.069262   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:51.595258   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:51.606745   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:51.606819   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:51.636395   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:51.636416   59960 cri.go:89] found id: ""
	I1126 20:11:51.636430   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:51.636488   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.640040   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:51.640115   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:51.676792   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:51.676812   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:51.676816   59960 cri.go:89] found id: ""
	I1126 20:11:51.676824   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:51.676877   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.681110   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.685068   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:51.685183   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:51.720013   59960 cri.go:89] found id: ""
	I1126 20:11:51.720038   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.720047   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:51.720054   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:51.720111   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:51.748336   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:51.748360   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:51.748375   59960 cri.go:89] found id: ""
	I1126 20:11:51.748383   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:51.748439   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.752267   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.756170   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:51.756241   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:51.783057   59960 cri.go:89] found id: ""
	I1126 20:11:51.783086   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.783095   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:51.783101   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:51.783163   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:51.811250   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:51.811272   59960 cri.go:89] found id: ""
	I1126 20:11:51.811282   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:51.811338   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.815120   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:51.815232   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:51.846026   59960 cri.go:89] found id: ""
	I1126 20:11:51.846049   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.846064   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:51.846074   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:51.846086   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:51.890348   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:51.890380   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:51.920851   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:51.920922   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:51.977107   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:51.977140   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:52.060932   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:52.060981   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:52.093050   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:52.093078   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:52.176431   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:52.176468   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:52.215980   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:52.216012   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:52.327858   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:52.327901   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:52.340252   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:52.340285   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:52.418993   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:52.410090    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.410776    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.412508    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.413095    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.414685    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:52.410090    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.410776    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.412508    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.413095    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.414685    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:52.419016   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:52.419029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:54.944539   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:54.955542   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:54.955615   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:54.986048   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:54.986074   59960 cri.go:89] found id: ""
	I1126 20:11:54.986083   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:54.986139   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:54.989757   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:54.989829   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:55.016053   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:55.016085   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:55.016091   59960 cri.go:89] found id: ""
	I1126 20:11:55.016099   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:55.016174   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.019787   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.023250   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:55.023321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:55.069450   59960 cri.go:89] found id: ""
	I1126 20:11:55.069473   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.069482   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:55.069489   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:55.069572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:55.098641   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:55.098664   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:55.098669   59960 cri.go:89] found id: ""
	I1126 20:11:55.098676   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:55.098732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.102435   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.106227   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:55.106351   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:55.138121   59960 cri.go:89] found id: ""
	I1126 20:11:55.138145   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.138154   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:55.138174   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:55.138236   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:55.167513   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:55.167544   59960 cri.go:89] found id: ""
	I1126 20:11:55.167553   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:55.167618   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.171313   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:55.171381   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:55.202786   59960 cri.go:89] found id: ""
	I1126 20:11:55.202813   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.202822   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:55.202832   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:55.202866   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:55.302444   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:55.302521   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:55.340281   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:55.340307   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:55.380642   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:55.380671   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:55.413529   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:55.413559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:55.441562   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:55.441590   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:55.518521   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:55.518561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:55.558444   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:55.558478   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:55.571280   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:55.571312   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:55.640808   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:55.631279    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.631827    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.633724    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.634294    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.636622    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:55.631279    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.631827    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.633724    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.634294    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.636622    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:55.640840   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:55.640855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:55.687489   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:55.687525   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.274871   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:58.285429   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:58.285499   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:58.313375   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:58.313399   59960 cri.go:89] found id: ""
	I1126 20:11:58.313406   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:58.313459   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.316973   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:58.317046   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:58.343195   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:58.343222   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:58.343233   59960 cri.go:89] found id: ""
	I1126 20:11:58.343241   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:58.343299   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.346903   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.350464   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:58.350532   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:58.389630   59960 cri.go:89] found id: ""
	I1126 20:11:58.389651   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.389659   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:58.389666   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:58.389727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:58.417327   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.417347   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:58.417351   59960 cri.go:89] found id: ""
	I1126 20:11:58.417358   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:58.417415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.421999   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.425800   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:58.425864   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:58.452945   59960 cri.go:89] found id: ""
	I1126 20:11:58.452969   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.452977   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:58.452983   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:58.453043   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:58.488167   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:58.488198   59960 cri.go:89] found id: ""
	I1126 20:11:58.488207   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:58.488290   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.492158   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:58.492254   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:58.519792   59960 cri.go:89] found id: ""
	I1126 20:11:58.519815   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.519824   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:58.519833   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:58.519845   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:58.539152   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:58.539178   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:58.611844   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:58.602656    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.604433    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.605264    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.606165    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.607783    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:58.602656    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.604433    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.605264    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.606165    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.607783    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:58.611916   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:58.611936   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:58.653684   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:58.653755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:58.701629   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:58.701698   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:58.797678   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:58.797712   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:58.826943   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:58.826971   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:58.870347   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:58.870382   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.935086   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:58.935124   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:58.968825   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:58.968856   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:58.997914   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:58.998030   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:01.577720   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:01.589568   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:01.589642   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:01.621435   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:01.621457   59960 cri.go:89] found id: ""
	I1126 20:12:01.621466   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:01.621521   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.625557   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:01.625630   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:01.653424   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:01.653447   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:01.653452   59960 cri.go:89] found id: ""
	I1126 20:12:01.653459   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:01.653520   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.658113   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.663163   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:01.663279   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:01.690617   59960 cri.go:89] found id: ""
	I1126 20:12:01.690692   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.690707   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:01.690714   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:01.690776   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:01.721669   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:01.721691   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:01.721696   59960 cri.go:89] found id: ""
	I1126 20:12:01.721705   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:01.721760   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.725774   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.729528   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:01.729608   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:01.755428   59960 cri.go:89] found id: ""
	I1126 20:12:01.755452   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.755461   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:01.755468   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:01.755529   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:01.783818   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:01.783841   59960 cri.go:89] found id: ""
	I1126 20:12:01.783849   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:01.783905   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.787656   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:01.787726   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:01.815958   59960 cri.go:89] found id: ""
	I1126 20:12:01.816025   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.816050   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:01.816067   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:01.816080   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:01.867560   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:01.867592   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:01.932205   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:01.932256   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:02.002408   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:02.002441   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:02.051577   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:02.051612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:02.088918   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:02.088948   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:02.168080   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:02.158735    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.159253    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162045    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162706    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.164462    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:02.158735    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.159253    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162045    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162706    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.164462    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:02.168105   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:02.168119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:02.244385   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:02.244435   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:02.282263   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:02.282293   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:02.383774   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:02.383810   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:02.399682   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:02.399712   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:04.928429   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:04.939418   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:04.939502   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:04.967318   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:04.967344   59960 cri.go:89] found id: ""
	I1126 20:12:04.967352   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:04.967406   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:04.971172   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:04.971242   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:04.998636   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:04.998660   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:04.998666   59960 cri.go:89] found id: ""
	I1126 20:12:04.998673   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:04.998728   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.002734   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.006234   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:05.006304   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:05.031905   59960 cri.go:89] found id: ""
	I1126 20:12:05.031931   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.031948   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:05.031954   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:05.032022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:05.062024   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:05.062047   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:05.062053   59960 cri.go:89] found id: ""
	I1126 20:12:05.062061   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:05.062119   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.066633   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.070769   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:05.070894   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:05.098088   59960 cri.go:89] found id: ""
	I1126 20:12:05.098113   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.098123   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:05.098130   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:05.098213   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:05.131371   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:05.131394   59960 cri.go:89] found id: ""
	I1126 20:12:05.131403   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:05.131477   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.135270   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:05.135372   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:05.162342   59960 cri.go:89] found id: ""
	I1126 20:12:05.162365   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.162374   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:05.162383   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:05.162395   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:05.235501   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:05.227170    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.227750    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229253    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229720    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.231198    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:05.227170    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.227750    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229253    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229720    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.231198    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:05.235522   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:05.235536   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:05.263102   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:05.263128   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:05.302111   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:05.302144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:05.333187   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:05.333216   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:05.359477   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:05.359505   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:05.438760   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:05.438798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:05.451777   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:05.451807   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:05.498508   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:05.498543   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:05.568808   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:05.568843   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:05.616879   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:05.616909   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:08.220414   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:08.231126   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:08.231199   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:08.258035   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:08.258105   59960 cri.go:89] found id: ""
	I1126 20:12:08.258125   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:08.258192   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.262176   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:08.262249   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:08.289710   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:08.289733   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:08.289739   59960 cri.go:89] found id: ""
	I1126 20:12:08.289750   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:08.289805   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.293485   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.297802   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:08.297880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:08.327209   59960 cri.go:89] found id: ""
	I1126 20:12:08.327234   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.327243   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:08.327263   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:08.327336   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:08.357819   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:08.357840   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:08.357845   59960 cri.go:89] found id: ""
	I1126 20:12:08.357852   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:08.357906   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.361705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.365237   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:08.365328   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:08.394319   59960 cri.go:89] found id: ""
	I1126 20:12:08.394383   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.394399   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:08.394406   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:08.394480   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:08.420463   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:08.420527   59960 cri.go:89] found id: ""
	I1126 20:12:08.420553   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:08.420638   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.424335   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:08.424450   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:08.452961   59960 cri.go:89] found id: ""
	I1126 20:12:08.452986   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.452995   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:08.453003   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:08.453014   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:08.493988   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:08.494022   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:08.544465   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:08.544499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:08.574385   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:08.574413   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:08.586334   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:08.586371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:08.667454   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:08.650997    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.659303    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.660307    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662037    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662374    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:08.650997    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.659303    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.660307    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662037    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662374    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:08.667486   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:08.667499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:08.699349   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:08.699378   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:08.764949   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:08.764985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:08.796757   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:08.796785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:08.880624   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:08.880660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:08.914640   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:08.914667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:11.513808   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:11.524482   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:11.524580   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:11.558859   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:11.558902   59960 cri.go:89] found id: ""
	I1126 20:12:11.558911   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:11.558970   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.562673   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:11.562747   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:11.588932   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:11.588951   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:11.588956   59960 cri.go:89] found id: ""
	I1126 20:12:11.588963   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:11.589017   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.592810   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.596570   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:11.596643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:11.623065   59960 cri.go:89] found id: ""
	I1126 20:12:11.623145   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.623161   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:11.623169   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:11.623229   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:11.650581   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:11.650605   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:11.650610   59960 cri.go:89] found id: ""
	I1126 20:12:11.650618   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:11.650671   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.655559   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.659747   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:11.659817   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:11.687296   59960 cri.go:89] found id: ""
	I1126 20:12:11.687322   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.687331   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:11.687337   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:11.687396   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:11.720511   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:11.720579   59960 cri.go:89] found id: ""
	I1126 20:12:11.720617   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:11.720708   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.724437   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:11.724506   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:11.749548   59960 cri.go:89] found id: ""
	I1126 20:12:11.749582   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.749591   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:11.749601   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:11.749612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:11.844417   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:11.844451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:11.856841   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:11.856870   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:11.927039   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:11.919031    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.919434    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921013    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921770    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.923409    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:11.919031    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.919434    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921013    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921770    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.923409    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:11.927072   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:11.927085   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:11.952749   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:11.952778   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:11.979828   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:11.979854   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:12.054969   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:12.055007   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:12.096829   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:12.096861   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:12.139040   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:12.139073   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:12.188630   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:12.188665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:12.261491   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:12.261525   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:14.793314   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:14.805690   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:14.805792   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:14.834480   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:14.834550   59960 cri.go:89] found id: ""
	I1126 20:12:14.834563   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:14.834624   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.838451   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:14.838546   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:14.865258   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:14.865280   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:14.865288   59960 cri.go:89] found id: ""
	I1126 20:12:14.865296   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:14.865369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.869042   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.872598   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:14.872673   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:14.899453   59960 cri.go:89] found id: ""
	I1126 20:12:14.899475   59960 logs.go:282] 0 containers: []
	W1126 20:12:14.899484   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:14.899491   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:14.899553   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:14.927802   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:14.927830   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:14.927837   59960 cri.go:89] found id: ""
	I1126 20:12:14.927845   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:14.927940   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.932558   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.936133   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:14.936204   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:14.961102   59960 cri.go:89] found id: ""
	I1126 20:12:14.961173   59960 logs.go:282] 0 containers: []
	W1126 20:12:14.961195   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:14.961215   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:14.961302   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:15.002363   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:15.002384   59960 cri.go:89] found id: ""
	I1126 20:12:15.002393   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:15.002447   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:15.006142   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:15.006212   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:15.032134   59960 cri.go:89] found id: ""
	I1126 20:12:15.032199   59960 logs.go:282] 0 containers: []
	W1126 20:12:15.032214   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:15.032224   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:15.032240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:15.081347   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:15.081379   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:15.180623   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:15.180658   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:15.209901   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:15.209962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:15.262607   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:15.262636   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:15.288510   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:15.288544   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:15.367680   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:15.367714   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:15.412204   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:15.412231   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:15.424270   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:15.424300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:15.503073   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:15.494667    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.495283    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.496993    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.497515    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.498972    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:15.494667    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.495283    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.496993    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.497515    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.498972    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:15.503139   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:15.503167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:15.550262   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:15.550296   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.118444   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:18.129864   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:18.129981   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:18.156819   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:18.156838   59960 cri.go:89] found id: ""
	I1126 20:12:18.156846   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:18.156904   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.161071   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:18.161149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:18.189616   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:18.189639   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:18.189644   59960 cri.go:89] found id: ""
	I1126 20:12:18.189651   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:18.189705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.193599   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.197622   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:18.197702   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:18.229000   59960 cri.go:89] found id: ""
	I1126 20:12:18.229024   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.229034   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:18.229041   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:18.229097   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:18.258704   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.258728   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:18.258734   59960 cri.go:89] found id: ""
	I1126 20:12:18.258741   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:18.258799   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.262617   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.266630   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:18.266703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:18.294498   59960 cri.go:89] found id: ""
	I1126 20:12:18.294520   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.294528   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:18.294535   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:18.294592   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:18.321461   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:18.321534   59960 cri.go:89] found id: ""
	I1126 20:12:18.321556   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:18.321645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.325350   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:18.325460   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:18.351492   59960 cri.go:89] found id: ""
	I1126 20:12:18.351553   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.351579   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:18.351599   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:18.351637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:18.407171   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:18.407205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:18.439080   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:18.439112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:18.547958   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:18.547995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:18.619721   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:18.609846    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.610654    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612119    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612768    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.614366    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:18.609846    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.610654    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612119    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612768    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.614366    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:18.619742   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:18.619754   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:18.645098   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:18.645177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:18.682606   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:18.682639   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.763422   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:18.763453   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:18.795735   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:18.795762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:18.822004   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:18.822035   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:18.896691   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:18.896727   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
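The `cri.go:89] found id:` lines in each diagnostic cycle above carry the container IDs that minikube then feeds to `crictl logs`. When triaging a report like this by hand, it can help to pull those IDs out of an excerpt programmatically. A minimal sketch (the `container_ids` helper name and the regex are illustrative, not part of minikube):

```python
import re

# Matches minikube's cri.go "found id" lines, e.g.:
#   I1126 20:12:18.156819   59960 cri.go:89] found id: "11026e76..."
# Lines with an empty id (found id: "") are skipped, since the
# regex requires at least one hex character inside the quotes.
FOUND_ID = re.compile(r'cri\.go:\d+\] found id: "([0-9a-f]+)"')

def container_ids(log_lines):
    """Return the non-empty container IDs reported in a log excerpt."""
    ids = []
    for line in log_lines:
        m = FOUND_ID.search(line)
        if m:
            ids.append(m.group(1))
    return ids
```

Feeding in the etcd lookup from one cycle would yield the two etcd container IDs that the subsequent `crictl logs --tail 400 ...` calls target.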
	I1126 20:12:21.410083   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:21.420840   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:21.420938   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:21.446994   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:21.447016   59960 cri.go:89] found id: ""
	I1126 20:12:21.447024   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:21.447102   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.450650   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:21.450721   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:21.479530   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:21.479554   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:21.479559   59960 cri.go:89] found id: ""
	I1126 20:12:21.479566   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:21.479639   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.483856   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.487301   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:21.487396   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:21.514632   59960 cri.go:89] found id: ""
	I1126 20:12:21.514655   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.514664   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:21.514677   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:21.514734   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:21.552676   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:21.552697   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:21.552701   59960 cri.go:89] found id: ""
	I1126 20:12:21.552708   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:21.552764   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.558562   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.562503   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:21.562570   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:21.592027   59960 cri.go:89] found id: ""
	I1126 20:12:21.592051   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.592059   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:21.592065   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:21.592122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:21.622050   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:21.622069   59960 cri.go:89] found id: ""
	I1126 20:12:21.622078   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:21.622133   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.625979   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:21.626057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:21.659506   59960 cri.go:89] found id: ""
	I1126 20:12:21.659530   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.659539   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:21.659548   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:21.659561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:21.692379   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:21.692406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:21.765021   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:21.765055   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:21.839116   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:21.830975    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.831759    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833349    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833904    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.835476    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:21.830975    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.831759    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833349    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833904    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.835476    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:21.839140   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:21.839153   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:21.865386   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:21.865413   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:21.904223   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:21.904257   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:21.949513   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:21.949545   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:21.975811   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:21.975838   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:22.009804   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:22.009830   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:22.114067   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:22.114107   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:22.129823   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:22.129850   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:24.699777   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:24.710717   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:24.710835   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:24.737361   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:24.737395   59960 cri.go:89] found id: ""
	I1126 20:12:24.737404   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:24.737467   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.741100   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:24.741181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:24.766942   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:24.767005   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:24.767023   59960 cri.go:89] found id: ""
	I1126 20:12:24.767038   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:24.767117   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.771423   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.775599   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:24.775679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:24.807211   59960 cri.go:89] found id: ""
	I1126 20:12:24.807238   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.807247   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:24.807254   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:24.807313   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:24.839448   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:24.839474   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:24.839480   59960 cri.go:89] found id: ""
	I1126 20:12:24.839487   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:24.839543   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.843345   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.846785   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:24.846859   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:24.875974   59960 cri.go:89] found id: ""
	I1126 20:12:24.875999   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.876008   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:24.876015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:24.876074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:24.904623   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:24.904646   59960 cri.go:89] found id: ""
	I1126 20:12:24.904655   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:24.904729   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.908536   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:24.908626   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:24.937367   59960 cri.go:89] found id: ""
	I1126 20:12:24.937448   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.937471   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:24.937494   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:24.937534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:24.976827   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:24.976864   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:25.024594   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:25.024629   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:25.103663   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:25.103701   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:25.184899   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:25.184934   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:25.288663   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:25.288696   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:25.303312   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:25.303340   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:25.371319   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:25.361818    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.362509    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.364256    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.365013    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.366870    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:25.371342   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:25.371357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:25.399886   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:25.399954   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:25.431130   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:25.431162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:25.457679   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:25.457758   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
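The "container status" step above uses a shell fallback so the command still runs whether or not `crictl` is on the PATH. A minimal sketch of that fallback logic (the `CRICTL` variable name is illustrative, not from minikube's source):

```shell
#!/bin/bash
# `which crictl` prints the binary's path and exits 0 when installed;
# otherwise the `|| echo crictl` branch supplies the bare name so the
# later command line is still well-formed (it will then fail loudly,
# triggering the `|| sudo docker ps -a` fallback in the real log line).
CRICTL="$(which crictl || echo crictl)"
echo "would run: sudo ${CRICTL} ps -a"
```

This mirrors the pattern in the log line `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`: resolve the CRI client if present, fall back to Docker if the CRI listing fails.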
	I1126 20:12:27.990400   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:28.001290   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:28.001359   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:28.027402   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:28.027424   59960 cri.go:89] found id: ""
	I1126 20:12:28.027441   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:28.027501   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.030992   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:28.031083   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:28.072993   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:28.073014   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:28.073019   59960 cri.go:89] found id: ""
	I1126 20:12:28.073026   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:28.073084   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.076846   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.080628   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:28.080762   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:28.107876   59960 cri.go:89] found id: ""
	I1126 20:12:28.107902   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.107911   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:28.107918   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:28.107993   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:28.135277   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:28.135299   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:28.135305   59960 cri.go:89] found id: ""
	I1126 20:12:28.135312   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:28.135369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.139340   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.143115   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:28.143193   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:28.179129   59960 cri.go:89] found id: ""
	I1126 20:12:28.179230   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.179259   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:28.179273   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:28.179346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:28.208432   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:28.208453   59960 cri.go:89] found id: ""
	I1126 20:12:28.208465   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:28.208523   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.212104   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:28.212174   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:28.239214   59960 cri.go:89] found id: ""
	I1126 20:12:28.239290   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.239307   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:28.239317   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:28.239331   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:28.311306   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:28.311342   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:28.340943   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:28.340972   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:28.376088   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:28.376113   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:28.447578   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:28.440425    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.440837    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442342    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442644    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.444078    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:28.447601   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:28.447613   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:28.494672   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:28.494707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:28.524817   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:28.524847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:28.611534   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:28.611568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:28.717586   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:28.717621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:28.729869   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:28.729894   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:28.755777   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:28.755805   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.304943   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:31.316121   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:31.316189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:31.344914   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:31.344936   59960 cri.go:89] found id: ""
	I1126 20:12:31.344945   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:31.345000   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.348636   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:31.348708   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:31.376592   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.376614   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:31.376623   59960 cri.go:89] found id: ""
	I1126 20:12:31.376630   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:31.376683   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.380757   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.384468   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:31.384545   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:31.415544   59960 cri.go:89] found id: ""
	I1126 20:12:31.415570   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.415579   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:31.415586   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:31.415646   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:31.441604   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:31.441680   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:31.441699   59960 cri.go:89] found id: ""
	I1126 20:12:31.441723   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:31.441808   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.445590   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.449159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:31.449233   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:31.475467   59960 cri.go:89] found id: ""
	I1126 20:12:31.475492   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.475501   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:31.475507   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:31.475567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:31.505974   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:31.505995   59960 cri.go:89] found id: ""
	I1126 20:12:31.506004   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:31.506068   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.510913   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:31.510988   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:31.555870   59960 cri.go:89] found id: ""
	I1126 20:12:31.555901   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.555911   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:31.555920   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:31.555932   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:31.569317   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:31.569396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:31.639071   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:31.630335    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.631132    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.632992    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.633425    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.635012    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:31.639141   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:31.639171   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:31.685122   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:31.685156   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:31.715735   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:31.715763   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:31.744469   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:31.744499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.782788   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:31.782822   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:31.854784   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:31.854820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:31.883960   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:31.883989   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:31.968197   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:31.968235   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:32.000618   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:32.000646   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:34.599812   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:34.610580   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:34.610690   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:34.643812   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:34.643835   59960 cri.go:89] found id: ""
	I1126 20:12:34.643844   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:34.643902   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.647819   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:34.647891   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:34.681825   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:34.681849   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:34.681855   59960 cri.go:89] found id: ""
	I1126 20:12:34.681863   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:34.681959   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.685589   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.689208   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:34.689280   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:34.719704   59960 cri.go:89] found id: ""
	I1126 20:12:34.719727   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.719736   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:34.719743   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:34.719802   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:34.745609   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:34.745632   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:34.745639   59960 cri.go:89] found id: ""
	I1126 20:12:34.745646   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:34.745704   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.749369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.752915   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:34.752982   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:34.778956   59960 cri.go:89] found id: ""
	I1126 20:12:34.778982   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.778996   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:34.779003   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:34.779059   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:34.805123   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:34.805146   59960 cri.go:89] found id: ""
	I1126 20:12:34.805153   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:34.805211   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.808760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:34.808834   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:34.834427   59960 cri.go:89] found id: ""
	I1126 20:12:34.834452   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.834462   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:34.834471   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:34.834482   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:34.912760   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:34.912792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:35.015751   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:35.015790   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:35.046216   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:35.046291   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:35.092725   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:35.092760   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:35.163096   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:35.163130   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:35.191405   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:35.191488   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:35.227181   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:35.227213   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:35.240889   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:35.240922   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:35.311849   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:35.302602    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.303934    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.304899    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.306705    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.307280    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:35.311871   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:35.311884   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:35.356916   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:35.356951   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:37.883250   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:37.894052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:37.894122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:37.924918   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:37.924943   59960 cri.go:89] found id: ""
	I1126 20:12:37.924956   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:37.925020   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.928865   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:37.928940   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:37.961907   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:37.961958   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:37.961964   59960 cri.go:89] found id: ""
	I1126 20:12:37.961971   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:37.962035   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.965843   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.969339   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:37.969409   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:37.995343   59960 cri.go:89] found id: ""
	I1126 20:12:37.995373   59960 logs.go:282] 0 containers: []
	W1126 20:12:37.995381   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:37.995388   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:37.995491   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:38.022312   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:38.022334   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:38.022339   59960 cri.go:89] found id: ""
	I1126 20:12:38.022346   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:38.022413   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.026080   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.029533   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:38.029622   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:38.060280   59960 cri.go:89] found id: ""
	I1126 20:12:38.060307   59960 logs.go:282] 0 containers: []
	W1126 20:12:38.060346   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:38.060368   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:38.060437   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:38.091248   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:38.091312   59960 cri.go:89] found id: ""
	I1126 20:12:38.091327   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:38.091425   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.095836   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:38.095914   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:38.125378   59960 cri.go:89] found id: ""
	I1126 20:12:38.125403   59960 logs.go:282] 0 containers: []
	W1126 20:12:38.125413   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:38.125422   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:38.125436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:38.151847   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:38.151875   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:38.202356   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:38.202391   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:38.247650   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:38.247725   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:38.275709   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:38.275736   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:38.307514   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:38.307542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:38.404957   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:38.404994   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:38.491924   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:38.491962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:38.521423   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:38.521460   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:38.598021   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:38.598053   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:38.610973   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:38.611004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:38.687841   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:38.679705   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.680686   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.681793   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.682498   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.684162   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:41.188401   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:41.199011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:41.199080   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:41.227170   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:41.227196   59960 cri.go:89] found id: ""
	I1126 20:12:41.227205   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:41.227260   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.230873   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:41.230945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:41.257484   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:41.257506   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:41.257522   59960 cri.go:89] found id: ""
	I1126 20:12:41.257529   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:41.257584   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.261286   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.265036   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:41.265101   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:41.290579   59960 cri.go:89] found id: ""
	I1126 20:12:41.290645   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.290669   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:41.290682   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:41.290741   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:41.319766   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:41.319786   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:41.319791   59960 cri.go:89] found id: ""
	I1126 20:12:41.319799   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:41.319859   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.323637   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.327077   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:41.327177   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:41.356676   59960 cri.go:89] found id: ""
	I1126 20:12:41.356702   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.356711   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:41.356719   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:41.356783   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:41.385771   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:41.385790   59960 cri.go:89] found id: ""
	I1126 20:12:41.385798   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:41.385852   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.389446   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:41.389544   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:41.416642   59960 cri.go:89] found id: ""
	I1126 20:12:41.416710   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.416732   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:41.416754   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:41.416788   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:41.482246   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:41.473419   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.474136   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.475824   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.476403   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.478152   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:41.482311   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:41.482339   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:41.509950   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:41.510016   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:41.557291   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:41.557324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:41.584211   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:41.584240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:41.666177   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:41.666212   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:41.767334   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:41.767369   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:41.781064   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:41.781089   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:41.825285   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:41.825321   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:41.892538   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:41.892573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:41.920754   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:41.920785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:44.468280   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:44.479465   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:44.479546   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:44.507592   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:44.507615   59960 cri.go:89] found id: ""
	I1126 20:12:44.507623   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:44.507679   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.511422   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:44.511510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:44.543146   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:44.543169   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:44.543174   59960 cri.go:89] found id: ""
	I1126 20:12:44.543181   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:44.543251   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.547022   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.550639   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:44.550719   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:44.579025   59960 cri.go:89] found id: ""
	I1126 20:12:44.579054   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.579063   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:44.579070   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:44.579139   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:44.611309   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:44.611332   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:44.611336   59960 cri.go:89] found id: ""
	I1126 20:12:44.611344   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:44.611407   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.615332   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.619108   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:44.619183   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:44.645161   59960 cri.go:89] found id: ""
	I1126 20:12:44.645185   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.645194   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:44.645201   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:44.645257   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:44.684280   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:44.684301   59960 cri.go:89] found id: ""
	I1126 20:12:44.684310   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:44.684364   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.687985   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:44.688057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:44.713170   59960 cri.go:89] found id: ""
	I1126 20:12:44.713193   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.713202   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:44.713211   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:44.713225   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:44.790764   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:44.782647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.783505   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785179   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785579   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.787022   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:12:44.790787   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:44.790801   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:44.841911   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:44.842082   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:44.886124   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:44.886155   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:44.956783   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:44.956817   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:44.992805   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:44.992834   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:45.021163   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:45.021190   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:45.060873   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:45.061452   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:45.201027   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:45.201119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:45.266419   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:45.266547   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:45.415986   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:45.416024   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:47.928674   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:47.940771   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:47.940843   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:47.966175   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:47.966194   59960 cri.go:89] found id: ""
	I1126 20:12:47.966202   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:47.966254   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:47.969908   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:47.970011   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:47.997001   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:47.997027   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:47.997032   59960 cri.go:89] found id: ""
	I1126 20:12:47.997040   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:47.997096   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.001757   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.005881   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:48.005980   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:48.031565   59960 cri.go:89] found id: ""
	I1126 20:12:48.031587   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.031595   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:48.031602   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:48.031660   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:48.063357   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:48.063380   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:48.063386   59960 cri.go:89] found id: ""
	I1126 20:12:48.063393   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:48.063450   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.068044   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.073135   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:48.073260   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:48.103364   59960 cri.go:89] found id: ""
	I1126 20:12:48.103391   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.103401   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:48.103408   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:48.103511   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:48.134700   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:48.134720   59960 cri.go:89] found id: ""
	I1126 20:12:48.134728   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:48.134795   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.138489   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:48.138568   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:48.164615   59960 cri.go:89] found id: ""
	I1126 20:12:48.164639   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.164648   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:48.164657   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:48.164670   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:48.238206   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:48.238245   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:48.270325   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:48.270352   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:48.316632   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:48.316660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:48.328526   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:48.328554   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:48.370051   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:48.370081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:48.397236   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:48.397264   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:48.478994   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:48.479029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:48.586134   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:48.586167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:48.661172   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:48.650880   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.652436   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.653061   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.654717   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.655290   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:48.650880   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.652436   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.653061   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.654717   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.655290   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:48.661195   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:48.661211   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:48.689769   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:48.689797   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.235721   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:51.246961   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:51.247038   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:51.276386   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:51.276410   59960 cri.go:89] found id: ""
	I1126 20:12:51.276419   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:51.276472   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.280282   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:51.280363   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:51.307844   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:51.307875   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.307880   59960 cri.go:89] found id: ""
	I1126 20:12:51.307888   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:51.307944   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.311885   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.315516   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:51.315643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:51.343040   59960 cri.go:89] found id: ""
	I1126 20:12:51.343068   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.343077   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:51.343084   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:51.343144   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:51.371879   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:51.371901   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:51.371907   59960 cri.go:89] found id: ""
	I1126 20:12:51.371920   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:51.371976   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.375815   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.379444   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:51.379518   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:51.409590   59960 cri.go:89] found id: ""
	I1126 20:12:51.409615   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.409624   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:51.409630   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:51.409688   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:51.440665   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:51.440692   59960 cri.go:89] found id: ""
	I1126 20:12:51.440701   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:51.440756   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.444486   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:51.444565   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:51.470661   59960 cri.go:89] found id: ""
	I1126 20:12:51.470686   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.470695   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:51.470705   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:51.470749   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:51.482794   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:51.482823   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:51.570460   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:51.561457   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.562296   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.563970   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.564288   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.566409   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:51.561457   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.562296   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.563970   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.564288   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.566409   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:51.570484   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:51.570498   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:51.596696   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:51.596724   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:51.657780   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:51.657820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:51.736300   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:51.736338   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:51.772635   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:51.772664   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:51.808014   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:51.808042   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:51.909775   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:51.909814   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.955849   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:51.955887   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:51.986011   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:51.986040   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:54.569991   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:54.582000   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:54.582074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:54.610486   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:54.610506   59960 cri.go:89] found id: ""
	I1126 20:12:54.610515   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:54.610573   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.614711   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:54.614787   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:54.641548   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:54.641571   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:54.641577   59960 cri.go:89] found id: ""
	I1126 20:12:54.641584   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:54.641645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.645430   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.649375   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:54.649465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:54.677350   59960 cri.go:89] found id: ""
	I1126 20:12:54.677377   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.677386   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:54.677399   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:54.677456   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:54.706226   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:54.706249   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:54.706254   59960 cri.go:89] found id: ""
	I1126 20:12:54.706261   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:54.706315   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.710188   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.713666   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:54.713759   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:54.745132   59960 cri.go:89] found id: ""
	I1126 20:12:54.745158   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.745167   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:54.745174   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:54.745235   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:54.774016   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:54.774039   59960 cri.go:89] found id: ""
	I1126 20:12:54.774047   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:54.774105   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.778220   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:54.778293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:54.807768   59960 cri.go:89] found id: ""
	I1126 20:12:54.807831   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.807845   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:54.807855   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:54.807867   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:54.904620   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:54.904657   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:54.931520   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:54.931548   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:54.974322   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:54.974360   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:55.010146   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:55.010176   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:55.044963   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:55.045006   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:55.060490   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:55.060520   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:55.132694   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:55.124286   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.124937   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.126610   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.127207   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.128929   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:55.124286   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.124937   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.126610   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.127207   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.128929   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:55.132729   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:55.132746   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:55.180103   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:55.180139   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:55.258117   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:55.258154   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:55.289687   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:55.289716   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:57.870076   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:57.881883   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:57.881978   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:57.911809   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:57.911833   59960 cri.go:89] found id: ""
	I1126 20:12:57.911841   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:57.911899   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.915590   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:57.915685   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:57.943647   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:57.943671   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:57.943677   59960 cri.go:89] found id: ""
	I1126 20:12:57.943684   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:57.943747   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.947699   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.951409   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:57.951489   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:57.979114   59960 cri.go:89] found id: ""
	I1126 20:12:57.979138   59960 logs.go:282] 0 containers: []
	W1126 20:12:57.979147   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:57.979154   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:57.979214   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:58.009760   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:58.009781   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:58.009787   59960 cri.go:89] found id: ""
	I1126 20:12:58.009794   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:58.009855   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.013598   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.017135   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:58.017207   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:58.047222   59960 cri.go:89] found id: ""
	I1126 20:12:58.047247   59960 logs.go:282] 0 containers: []
	W1126 20:12:58.047255   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:58.047262   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:58.047324   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:58.094431   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:58.094510   59960 cri.go:89] found id: ""
	I1126 20:12:58.094524   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:58.094586   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.099004   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:58.099099   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:58.126698   59960 cri.go:89] found id: ""
	I1126 20:12:58.126727   59960 logs.go:282] 0 containers: []
	W1126 20:12:58.126735   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:58.126744   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:58.126756   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:58.155602   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:58.155629   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:58.196131   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:58.196166   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:58.243760   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:58.243793   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:58.314546   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:58.314583   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:58.347422   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:58.347451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:58.373247   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:58.373277   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:58.448488   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:58.448524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
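The "container status" command above uses a two-level shell fallback: resolve `crictl` to a full path if it is on `PATH` (keeping the bare name otherwise, so the error message stays readable), and fall back to `docker ps -a` if the `crictl` invocation fails. Isolated as a sketch (using POSIX `command -v` in place of the log's `which`; the helper names are ours, not minikube's):

```shell
# resolve NAME: print the full path of NAME when found on PATH,
# otherwise echo NAME unchanged (same effect as `which X || echo X`).
resolve() {
  command -v "$1" || echo "$1"
}

# container_status: prefer crictl, fall back to docker, and report
# plainly if neither CLI is available.
container_status() {
  "$(resolve crictl)" ps -a 2>/dev/null \
    || docker ps -a 2>/dev/null \
    || echo "no container runtime CLI available"
}
```

On a CRI-O node like the one in this run, the first branch succeeds and the `docker` fallback never executes.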
	I1126 20:12:58.480586   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:58.480615   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:58.586743   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:58.586799   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:58.600003   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:58.600029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:58.682648   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:58.673481   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.674315   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.675021   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.676838   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.677737   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:58.673481   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.674315   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.675021   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.676838   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.677737   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:01.183502   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:01.195046   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:01.195153   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:01.224257   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:01.224281   59960 cri.go:89] found id: ""
	I1126 20:13:01.224289   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:01.224365   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.228134   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:01.228206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:01.265990   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:01.266014   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:01.266019   59960 cri.go:89] found id: ""
	I1126 20:13:01.266027   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:01.266084   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.270682   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.274505   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:01.274580   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:01.302962   59960 cri.go:89] found id: ""
	I1126 20:13:01.302989   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.302998   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:01.303005   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:01.303072   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:01.335599   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:01.335621   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:01.335627   59960 cri.go:89] found id: ""
	I1126 20:13:01.335635   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:01.335689   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.339621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.343531   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:01.343614   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:01.369553   59960 cri.go:89] found id: ""
	I1126 20:13:01.369578   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.369588   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:01.369594   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:01.369657   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:01.402170   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:01.402197   59960 cri.go:89] found id: ""
	I1126 20:13:01.402205   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:01.402266   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.406260   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:01.406336   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:01.432250   59960 cri.go:89] found id: ""
	I1126 20:13:01.432326   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.432352   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:01.432362   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:01.432378   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:01.473457   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:01.473491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:01.525391   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:01.525445   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:01.557734   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:01.557765   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:01.650427   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:01.650465   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:01.696040   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:01.696070   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:01.801258   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:01.801297   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:01.872498   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:01.872534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:01.912672   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:01.912725   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:01.927976   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:01.928008   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:02.002577   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:01.992139   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.993221   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.994589   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996153   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996915   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:01.992139   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.993221   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.994589   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996153   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996915   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:02.002601   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:02.002614   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:04.532051   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:04.544501   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:04.544572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:04.571414   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:04.571435   59960 cri.go:89] found id: ""
	I1126 20:13:04.571443   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:04.571494   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.575072   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:04.575149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:04.603292   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:04.603312   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:04.603316   59960 cri.go:89] found id: ""
	I1126 20:13:04.603326   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:04.603378   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.607479   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.610889   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:04.610970   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:04.636626   59960 cri.go:89] found id: ""
	I1126 20:13:04.636652   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.636662   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:04.636668   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:04.636745   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:04.665487   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:04.665511   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:04.665516   59960 cri.go:89] found id: ""
	I1126 20:13:04.665523   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:04.665599   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.669516   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.673155   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:04.673221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:04.705848   59960 cri.go:89] found id: ""
	I1126 20:13:04.705873   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.705882   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:04.705888   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:04.705971   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:04.741254   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:04.741277   59960 cri.go:89] found id: ""
	I1126 20:13:04.741285   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:04.741340   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.745396   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:04.745469   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:04.777680   59960 cri.go:89] found id: ""
	I1126 20:13:04.777713   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.777723   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:04.777732   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:04.777744   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:04.884972   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:04.885008   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:04.898040   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:04.898066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:04.971530   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:04.971610   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:05.003493   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:05.003573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:05.082481   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:05.082515   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:05.116089   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:05.116119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:05.186979   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:05.178888   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.179664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181297   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.183205   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:05.178888   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.179664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181297   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.183205   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:05.187006   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:05.187020   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:05.214669   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:05.214698   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:05.261207   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:05.261238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:05.306449   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:05.306482   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:07.838042   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:07.850498   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:07.850567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:07.878108   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:07.878130   59960 cri.go:89] found id: ""
	I1126 20:13:07.878138   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:07.878197   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.882580   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:07.882654   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:07.911855   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:07.911886   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:07.911891   59960 cri.go:89] found id: ""
	I1126 20:13:07.911899   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:07.911960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.915705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.919300   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:07.919371   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:07.951018   59960 cri.go:89] found id: ""
	I1126 20:13:07.951044   59960 logs.go:282] 0 containers: []
	W1126 20:13:07.951053   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:07.951059   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:07.951119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:07.978929   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:07.978951   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:07.978956   59960 cri.go:89] found id: ""
	I1126 20:13:07.978963   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:07.979017   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.983189   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.986830   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:07.986903   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:08.016199   59960 cri.go:89] found id: ""
	I1126 20:13:08.016231   59960 logs.go:282] 0 containers: []
	W1126 20:13:08.016240   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:08.016251   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:08.016325   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:08.053456   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:08.053528   59960 cri.go:89] found id: ""
	I1126 20:13:08.053549   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:08.053644   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:08.057986   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:08.058066   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:08.087479   59960 cri.go:89] found id: ""
	I1126 20:13:08.087508   59960 logs.go:282] 0 containers: []
	W1126 20:13:08.087517   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:08.087533   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:08.087546   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:08.132468   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:08.132502   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:08.176740   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:08.176778   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:08.250131   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:08.250178   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:08.280307   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:08.280337   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:08.310477   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:08.310506   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:08.413610   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:08.413648   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:08.484512   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:08.474848   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.476074   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.477530   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.478182   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.479748   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:08.474848   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.476074   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.477530   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.478182   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.479748   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:08.484538   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:08.484551   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:08.561138   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:08.561172   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:08.596362   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:08.596439   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:08.609838   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:08.609909   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.136633   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:11.147922   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:11.148007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:11.179880   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.179915   59960 cri.go:89] found id: ""
	I1126 20:13:11.179923   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:11.180040   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.184887   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:11.184958   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:11.213848   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:11.213872   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:11.213878   59960 cri.go:89] found id: ""
	I1126 20:13:11.213885   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:11.213981   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.217804   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.221572   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:11.221649   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:11.258706   59960 cri.go:89] found id: ""
	I1126 20:13:11.258783   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.258799   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:11.258806   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:11.258880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:11.289663   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:11.289686   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:11.289692   59960 cri.go:89] found id: ""
	I1126 20:13:11.289699   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:11.289755   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.293522   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.298425   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:11.298504   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:11.325442   59960 cri.go:89] found id: ""
	I1126 20:13:11.325508   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.325534   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:11.325552   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:11.325636   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:11.352745   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:11.352808   59960 cri.go:89] found id: ""
	I1126 20:13:11.352834   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:11.352923   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.356710   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:11.356824   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:11.384378   59960 cri.go:89] found id: ""
	I1126 20:13:11.384402   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.384412   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:11.384421   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:11.384433   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:11.396869   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:11.396938   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:11.467278   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:11.459180   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.459948   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.461472   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.462000   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.463589   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:11.459180   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.459948   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.461472   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.462000   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.463589   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:11.467302   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:11.467316   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.494598   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:11.494626   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:11.533337   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:11.533372   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:11.559364   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:11.559392   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:11.642834   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:11.642873   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:11.680367   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:11.680393   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:11.784039   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:11.784075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:11.834225   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:11.834260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:11.905094   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:11.905129   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.439226   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:14.451155   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:14.451245   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:14.493752   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:14.493776   59960 cri.go:89] found id: ""
	I1126 20:13:14.493784   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:14.493840   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.497504   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:14.497627   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:14.524624   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:14.524646   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:14.524652   59960 cri.go:89] found id: ""
	I1126 20:13:14.524659   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:14.524743   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.528418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.532417   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:14.532512   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:14.559402   59960 cri.go:89] found id: ""
	I1126 20:13:14.559477   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.559491   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:14.559498   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:14.559556   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:14.588825   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:14.588848   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.588853   59960 cri.go:89] found id: ""
	I1126 20:13:14.588860   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:14.588921   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.593022   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.596763   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:14.596831   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:14.624835   59960 cri.go:89] found id: ""
	I1126 20:13:14.624858   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.624867   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:14.624874   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:14.624929   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:14.650771   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:14.650846   59960 cri.go:89] found id: ""
	I1126 20:13:14.650872   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:14.650960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.656095   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:14.656219   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:14.682420   59960 cri.go:89] found id: ""
	I1126 20:13:14.682493   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.682517   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:14.682540   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:14.682581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:14.722936   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:14.722971   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.754105   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:14.754134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:14.786128   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:14.786156   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:14.798341   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:14.798370   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:14.873270   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:14.865757   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.866349   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.867866   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.868348   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.869793   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:14.865757   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.866349   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.867866   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.868348   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.869793   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:14.873292   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:14.873306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:14.920206   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:14.920240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:14.996591   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:14.996624   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:15.024423   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:15.024451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:15.105848   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:15.105881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:15.205091   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:15.205170   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:17.734682   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:17.745326   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:17.745391   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:17.773503   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:17.773525   59960 cri.go:89] found id: ""
	I1126 20:13:17.773534   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:17.773621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.777326   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:17.777400   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:17.805117   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:17.805139   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:17.805144   59960 cri.go:89] found id: ""
	I1126 20:13:17.805151   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:17.805206   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.809065   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.812530   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:17.812601   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:17.841430   59960 cri.go:89] found id: ""
	I1126 20:13:17.841456   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.841465   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:17.841472   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:17.841530   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:17.868985   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:17.869009   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:17.869014   59960 cri.go:89] found id: ""
	I1126 20:13:17.869024   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:17.869081   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.882183   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.885701   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:17.885794   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:17.918849   59960 cri.go:89] found id: ""
	I1126 20:13:17.918872   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.918880   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:17.918887   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:17.918947   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:17.949773   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:17.949849   59960 cri.go:89] found id: ""
	I1126 20:13:17.949872   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:17.949996   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.953636   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:17.953705   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:17.980243   59960 cri.go:89] found id: ""
	I1126 20:13:17.980266   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.980275   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:17.980284   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:17.980295   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:18.011301   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:18.011331   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:18.038493   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:18.038526   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:18.080613   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:18.080641   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:18.160950   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:18.160988   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:18.262170   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:18.262215   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:18.275569   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:18.275593   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:18.351781   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:18.343534   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.344057   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.345769   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.346381   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.347931   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:18.351805   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:18.351817   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:18.389344   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:18.389375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:18.434916   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:18.434949   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:18.527668   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:18.527702   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.058771   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:21.073274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:21.073339   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:21.121326   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:21.121345   59960 cri.go:89] found id: ""
	I1126 20:13:21.121356   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:21.121415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.130434   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:21.130507   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:21.164100   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:21.164161   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:21.164191   59960 cri.go:89] found id: ""
	I1126 20:13:21.164212   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:21.164289   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.168566   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.173217   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:21.173328   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:21.201882   59960 cri.go:89] found id: ""
	I1126 20:13:21.202006   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.202036   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:21.202055   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:21.202157   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:21.230033   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:21.230099   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:21.230120   59960 cri.go:89] found id: ""
	I1126 20:13:21.230144   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:21.230222   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.234188   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.238625   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:21.238709   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:21.266450   59960 cri.go:89] found id: ""
	I1126 20:13:21.266476   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.266485   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:21.266492   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:21.266567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:21.293192   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.293221   59960 cri.go:89] found id: ""
	I1126 20:13:21.293229   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:21.293320   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.297074   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:21.297146   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:21.325608   59960 cri.go:89] found id: ""
	I1126 20:13:21.325635   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.325644   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:21.325653   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:21.325665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:21.365168   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:21.365201   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:21.407809   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:21.407841   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:21.490502   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:21.490538   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:21.593562   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:21.593598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:21.620251   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:21.620280   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:21.696224   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:21.696260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:21.724295   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:21.724324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.754121   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:21.754146   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:21.785320   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:21.785347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:21.797528   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:21.797556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:21.871066   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:21.862248   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.863127   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.864832   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.865449   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.867089   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:24.371542   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:24.382011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:24.382074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:24.413323   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:24.413351   59960 cri.go:89] found id: ""
	I1126 20:13:24.413360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:24.413418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.417248   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:24.417327   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:24.443549   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:24.443571   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:24.443576   59960 cri.go:89] found id: ""
	I1126 20:13:24.443583   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:24.443638   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.447448   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.450865   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:24.450933   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:24.481019   59960 cri.go:89] found id: ""
	I1126 20:13:24.481043   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.481052   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:24.481059   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:24.481119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:24.509327   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:24.509349   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:24.509354   59960 cri.go:89] found id: ""
	I1126 20:13:24.509361   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:24.509416   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.512867   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.516116   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:24.516181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:24.546284   59960 cri.go:89] found id: ""
	I1126 20:13:24.546361   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.546390   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:24.546405   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:24.546464   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:24.571968   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:24.572032   59960 cri.go:89] found id: ""
	I1126 20:13:24.572047   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:24.572113   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.575760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:24.575830   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:24.603299   59960 cri.go:89] found id: ""
	I1126 20:13:24.603325   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.603334   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:24.603373   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:24.603390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:24.642562   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:24.642595   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:24.696607   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:24.696640   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:24.724494   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:24.724523   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:24.805443   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:24.805477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:24.880673   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:24.872137   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.872936   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.874737   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.875329   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.876994   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:24.880694   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:24.880708   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:24.912019   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:24.912047   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:24.998475   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:24.998511   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:25.027058   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:25.027084   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:25.060548   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:25.060577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:25.167756   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:25.167795   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:27.682279   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:27.693116   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:27.693189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:27.720687   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:27.720706   59960 cri.go:89] found id: ""
	I1126 20:13:27.720713   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:27.720765   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.724317   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:27.724388   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:27.751345   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:27.751369   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:27.751375   59960 cri.go:89] found id: ""
	I1126 20:13:27.751384   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:27.751445   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.755313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.758668   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:27.758738   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:27.788496   59960 cri.go:89] found id: ""
	I1126 20:13:27.788567   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.788592   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:27.788611   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:27.788703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:27.815714   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:27.815743   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:27.815749   59960 cri.go:89] found id: ""
	I1126 20:13:27.815757   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:27.815831   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.819360   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.822959   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:27.823038   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:27.853270   59960 cri.go:89] found id: ""
	I1126 20:13:27.853316   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.853326   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:27.853333   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:27.853403   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:27.880677   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:27.880701   59960 cri.go:89] found id: ""
	I1126 20:13:27.880710   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:27.880766   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.884425   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:27.884499   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:27.917060   59960 cri.go:89] found id: ""
	I1126 20:13:27.917126   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.917150   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:27.917183   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:27.917213   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:27.929246   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:27.929321   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:28.005492   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:27.995998   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.996970   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.999116   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.000043   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.001867   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:28.005554   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:28.005581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:28.032388   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:28.032414   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:28.090244   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:28.090279   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:28.140049   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:28.140081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:28.217015   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:28.217052   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:28.252634   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:28.252663   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:28.356298   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:28.356347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:28.391198   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:28.391227   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:28.470669   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:28.470706   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:31.018712   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:31.029520   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:31.029594   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:31.067229   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:31.067249   59960 cri.go:89] found id: ""
	I1126 20:13:31.067257   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:31.067315   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.071728   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:31.071796   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:31.100937   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:31.101015   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:31.101024   59960 cri.go:89] found id: ""
	I1126 20:13:31.101032   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:31.101092   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.106006   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.109883   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:31.110020   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:31.140073   59960 cri.go:89] found id: ""
	I1126 20:13:31.140098   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.140107   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:31.140114   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:31.140177   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:31.170126   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:31.170150   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:31.170155   59960 cri.go:89] found id: ""
	I1126 20:13:31.170163   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:31.170220   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.175522   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.180015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:31.180137   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:31.216744   59960 cri.go:89] found id: ""
	I1126 20:13:31.216771   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.216781   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:31.216787   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:31.216847   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:31.244620   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:31.244653   59960 cri.go:89] found id: ""
	I1126 20:13:31.244661   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:31.244727   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.248677   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:31.248770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:31.275812   59960 cri.go:89] found id: ""
	I1126 20:13:31.275890   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.275914   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:31.275936   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:31.275972   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:31.308954   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:31.308981   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:31.404058   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:31.404140   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:31.449144   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:31.449177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:31.526538   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:31.526575   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:31.613358   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:31.613393   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:31.626272   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:31.626300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:31.701051   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:31.692350   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.693035   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.694572   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.695120   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.696599   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:31.701076   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:31.701089   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:31.726047   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:31.726075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:31.770205   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:31.770246   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:31.800872   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:31.800898   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.331337   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:34.343013   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:34.343079   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:34.369127   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:34.369186   59960 cri.go:89] found id: ""
	I1126 20:13:34.369220   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:34.369305   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.372919   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:34.372984   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:34.400785   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:34.400806   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:34.400811   59960 cri.go:89] found id: ""
	I1126 20:13:34.400818   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:34.400871   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.404967   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.408568   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:34.408648   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:34.434956   59960 cri.go:89] found id: ""
	I1126 20:13:34.434981   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.434990   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:34.434996   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:34.435051   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:34.472918   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:34.472943   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:34.472948   59960 cri.go:89] found id: ""
	I1126 20:13:34.472956   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:34.473009   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.476556   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.480021   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:34.480097   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:34.506491   59960 cri.go:89] found id: ""
	I1126 20:13:34.506513   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.506522   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:34.506528   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:34.506587   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:34.534595   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.534618   59960 cri.go:89] found id: ""
	I1126 20:13:34.534627   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:34.534681   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.542373   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:34.542487   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:34.569404   59960 cri.go:89] found id: ""
	I1126 20:13:34.569439   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.569449   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:34.569473   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:34.569491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:34.594901   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:34.594926   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:34.661252   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:34.661357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:34.736470   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:34.736504   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.767635   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:34.767659   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:34.849541   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:34.849578   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:34.890089   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:34.890122   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:34.918362   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:34.918390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:34.955774   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:34.955800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:35.056965   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:35.057001   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:35.078639   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:35.078668   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:35.151655   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:35.143337   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.143918   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.145438   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.146046   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.147630   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:37.653306   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:37.665236   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:37.665306   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:37.692381   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:37.692404   59960 cri.go:89] found id: ""
	I1126 20:13:37.692420   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:37.692475   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.696411   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:37.696485   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:37.733416   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:37.733447   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:37.733452   59960 cri.go:89] found id: ""
	I1126 20:13:37.733459   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:37.733512   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.737487   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.740759   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:37.740827   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:37.770540   59960 cri.go:89] found id: ""
	I1126 20:13:37.770563   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.770571   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:37.770578   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:37.770645   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:37.798542   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:37.798566   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:37.798572   59960 cri.go:89] found id: ""
	I1126 20:13:37.798579   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:37.798632   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.802507   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.806007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:37.806128   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:37.831752   59960 cri.go:89] found id: ""
	I1126 20:13:37.831780   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.831789   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:37.831796   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:37.831911   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:37.859491   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:37.859516   59960 cri.go:89] found id: ""
	I1126 20:13:37.859526   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:37.859608   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.863305   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:37.863407   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:37.890262   59960 cri.go:89] found id: ""
	I1126 20:13:37.890324   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.890347   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:37.890370   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:37.890389   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:37.915303   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:37.915334   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:38.015981   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:38.016018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:38.028479   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:38.028518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:38.117235   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:38.107607   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.108494   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.110529   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.111224   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.112955   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:13:38.117268   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:38.117293   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:38.146073   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:38.146106   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:38.223055   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:38.223091   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:38.256738   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:38.256769   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:38.284204   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:38.284234   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:38.322205   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:38.322237   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:38.365768   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:38.365800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:40.946037   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:40.957084   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:40.957219   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:40.988160   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:40.988223   59960 cri.go:89] found id: ""
	I1126 20:13:40.988247   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:40.988330   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:40.991862   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:40.991975   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:41.021645   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:41.021671   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:41.021676   59960 cri.go:89] found id: ""
	I1126 20:13:41.021683   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:41.021776   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.025458   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.028751   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:41.028818   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:41.055272   59960 cri.go:89] found id: ""
	I1126 20:13:41.055297   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.055306   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:41.055313   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:41.055373   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:41.083272   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:41.083293   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:41.083298   59960 cri.go:89] found id: ""
	I1126 20:13:41.083306   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:41.083361   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.089116   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.092770   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:41.092882   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:41.119939   59960 cri.go:89] found id: ""
	I1126 20:13:41.119969   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.119978   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:41.119985   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:41.120085   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:41.149635   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:41.149657   59960 cri.go:89] found id: ""
	I1126 20:13:41.149666   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:41.149719   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.153346   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:41.153420   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:41.180294   59960 cri.go:89] found id: ""
	I1126 20:13:41.180320   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.180329   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:41.180338   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:41.180350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:41.207608   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:41.207638   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:41.250184   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:41.250217   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:41.280787   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:41.280815   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:41.350595   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:41.339246   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.340025   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.341777   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.342622   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.345147   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:41.339246   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.340025   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.341777   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.342622   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.345147   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:41.350618   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:41.350631   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:41.395571   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:41.395607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:41.471537   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:41.471576   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:41.503158   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:41.503187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:41.581612   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:41.581647   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:41.616210   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:41.616238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:41.712278   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:41.712311   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:44.224835   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:44.235354   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:44.235427   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:44.262020   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:44.262040   59960 cri.go:89] found id: ""
	I1126 20:13:44.262047   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:44.262100   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.266500   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:44.266621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:44.293469   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:44.293492   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:44.293498   59960 cri.go:89] found id: ""
	I1126 20:13:44.293515   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:44.293592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.297513   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.301293   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:44.301379   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:44.331229   59960 cri.go:89] found id: ""
	I1126 20:13:44.331252   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.331260   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:44.331266   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:44.331326   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:44.358510   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:44.358529   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:44.358534   59960 cri.go:89] found id: ""
	I1126 20:13:44.358540   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:44.358597   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.362369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.365719   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:44.365788   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:44.401237   59960 cri.go:89] found id: ""
	I1126 20:13:44.401303   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.401326   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:44.401348   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:44.401437   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:44.428506   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:44.428524   59960 cri.go:89] found id: ""
	I1126 20:13:44.428537   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:44.428592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.432302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:44.432379   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:44.461193   59960 cri.go:89] found id: ""
	I1126 20:13:44.461216   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.461225   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:44.461234   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:44.461245   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:44.472842   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:44.472911   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:44.552602   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:44.536833   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.537581   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.546763   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.547452   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.548655   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:44.536833   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.537581   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.546763   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.547452   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.548655   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:44.552629   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:44.552642   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:44.579143   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:44.579171   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:44.608447   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:44.608472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:44.634421   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:44.634447   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:44.669334   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:44.669362   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:44.770710   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:44.770785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:44.815986   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:44.816016   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:44.860293   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:44.860327   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:44.936110   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:44.936144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:47.514839   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:47.528244   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:47.528398   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:47.557240   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:47.557263   59960 cri.go:89] found id: ""
	I1126 20:13:47.557271   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:47.557328   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.561044   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:47.561146   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:47.586866   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:47.586888   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:47.586894   59960 cri.go:89] found id: ""
	I1126 20:13:47.586901   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:47.586956   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.591194   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.594829   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:47.594905   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:47.621081   59960 cri.go:89] found id: ""
	I1126 20:13:47.621104   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.621113   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:47.621120   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:47.621182   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:47.649583   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:47.649605   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:47.649610   59960 cri.go:89] found id: ""
	I1126 20:13:47.649618   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:47.649673   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.655090   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.659029   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:47.659096   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:47.685101   59960 cri.go:89] found id: ""
	I1126 20:13:47.685125   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.685134   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:47.685141   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:47.685198   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:47.712581   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:47.712603   59960 cri.go:89] found id: ""
	I1126 20:13:47.712612   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:47.712673   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.716384   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:47.716461   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:47.746287   59960 cri.go:89] found id: ""
	I1126 20:13:47.746321   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.746330   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:47.746357   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:47.746375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:47.776577   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:47.776607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:47.810845   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:47.810874   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:47.851317   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:47.851350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:47.897021   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:47.897054   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:47.925761   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:47.925792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:47.953836   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:47.953863   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:48.054533   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:48.054569   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:48.074474   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:48.074505   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:48.148938   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:48.137331   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.137950   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.139682   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.140242   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.143726   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:48.137331   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.137950   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.139682   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.140242   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.143726   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:48.148963   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:48.148977   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:48.231199   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:48.231234   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:50.823233   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:50.833805   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:50.833878   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:50.862309   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:50.862333   59960 cri.go:89] found id: ""
	I1126 20:13:50.862342   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:50.862396   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.865957   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:50.866034   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:50.892542   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:50.892565   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:50.892571   59960 cri.go:89] found id: ""
	I1126 20:13:50.892578   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:50.892632   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.896328   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.899831   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:50.899905   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:50.931031   59960 cri.go:89] found id: ""
	I1126 20:13:50.931098   59960 logs.go:282] 0 containers: []
	W1126 20:13:50.931112   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:50.931119   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:50.931176   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:50.958547   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:50.958580   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:50.958586   59960 cri.go:89] found id: ""
	I1126 20:13:50.958594   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:50.958649   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.962711   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.966380   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:50.966453   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:50.998188   59960 cri.go:89] found id: ""
	I1126 20:13:50.998483   59960 logs.go:282] 0 containers: []
	W1126 20:13:50.998498   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:50.998505   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:50.998592   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:51.031422   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:51.031447   59960 cri.go:89] found id: ""
	I1126 20:13:51.031462   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:51.031519   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:51.035715   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:51.035788   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:51.077429   59960 cri.go:89] found id: ""
	I1126 20:13:51.077452   59960 logs.go:282] 0 containers: []
	W1126 20:13:51.077460   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:51.077469   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:51.077481   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:51.105578   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:51.105609   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:51.188473   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:51.188518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:51.220853   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:51.220886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:51.304811   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:51.304848   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:51.337094   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:51.337162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:51.434145   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:51.434183   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:51.474781   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:51.474815   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:51.523360   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:51.523390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:51.556210   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:51.556238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:51.568960   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:51.568989   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:51.646125   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:51.637986   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.638634   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640319   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640884   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.642607   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:51.637986   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.638634   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640319   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640884   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.642607   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:54.147140   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:54.159570   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:54.159641   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:54.190129   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:54.190150   59960 cri.go:89] found id: ""
	I1126 20:13:54.190158   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:54.190221   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.193723   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:54.193795   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:54.221859   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:54.221881   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:54.221886   59960 cri.go:89] found id: ""
	I1126 20:13:54.221893   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:54.221986   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.225619   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.229615   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:54.229686   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:54.257427   59960 cri.go:89] found id: ""
	I1126 20:13:54.257454   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.257464   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:54.257470   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:54.257528   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:54.283499   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:54.283522   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:54.283528   59960 cri.go:89] found id: ""
	I1126 20:13:54.283535   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:54.283591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.287279   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.291072   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:54.291164   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:54.320377   59960 cri.go:89] found id: ""
	I1126 20:13:54.320409   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.320418   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:54.320424   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:54.320490   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:54.346357   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:54.346388   59960 cri.go:89] found id: ""
	I1126 20:13:54.346397   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:54.346453   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.350217   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:54.350337   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:54.387000   59960 cri.go:89] found id: ""
	I1126 20:13:54.387033   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.387042   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:54.387052   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:54.387064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:54.398981   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:54.399006   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:54.424733   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:54.424761   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:54.464124   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:54.464199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:54.516097   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:54.516149   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:54.597621   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:54.597656   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:54.626882   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:54.626916   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:54.706226   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:54.706262   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:54.777575   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:54.768229   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.769042   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.770705   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.771452   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.773075   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:54.768229   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.769042   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.770705   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.771452   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.773075   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:54.777599   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:54.777612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:54.808526   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:54.808556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:54.839385   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:54.839412   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:57.435357   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:57.446250   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:57.446321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:57.476511   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:57.476531   59960 cri.go:89] found id: ""
	I1126 20:13:57.476539   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:57.476595   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.480521   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:57.480599   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:57.508216   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:57.508239   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:57.508244   59960 cri.go:89] found id: ""
	I1126 20:13:57.508251   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:57.508312   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.512264   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.515930   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:57.516007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:57.546712   59960 cri.go:89] found id: ""
	I1126 20:13:57.546737   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.546746   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:57.546753   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:57.546811   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:57.575286   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:57.575308   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:57.575314   59960 cri.go:89] found id: ""
	I1126 20:13:57.575321   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:57.575403   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.579177   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.582844   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:57.582947   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:57.610240   59960 cri.go:89] found id: ""
	I1126 20:13:57.610268   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.610276   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:57.610282   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:57.610366   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:57.637690   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:57.637715   59960 cri.go:89] found id: ""
	I1126 20:13:57.637722   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:57.637804   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.641691   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:57.641816   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:57.673478   59960 cri.go:89] found id: ""
	I1126 20:13:57.673512   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.673521   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:57.673546   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:57.673565   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:57.724644   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:57.724677   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:57.801587   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:57.801622   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:57.846990   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:57.847020   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:57.948301   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:57.948336   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:57.960477   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:57.960510   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:58.036195   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:58.028003   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.028530   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030166   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030875   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.032666   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:58.028003   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.028530   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030166   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030875   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.032666   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:58.036262   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:58.036289   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:58.071247   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:58.071284   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:58.102552   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:58.102582   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:58.131358   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:58.131450   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:58.207844   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:58.207883   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:00.754664   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:00.765702   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:00.765771   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:00.806554   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:00.806579   59960 cri.go:89] found id: ""
	I1126 20:14:00.806587   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:00.806641   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.810501   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:00.810586   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:00.838112   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:00.838139   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:00.838144   59960 cri.go:89] found id: ""
	I1126 20:14:00.838152   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:00.838207   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.842001   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.845613   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:00.845684   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:00.874701   59960 cri.go:89] found id: ""
	I1126 20:14:00.874726   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.874735   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:00.874742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:00.874821   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:00.903003   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:00.903027   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:00.903032   59960 cri.go:89] found id: ""
	I1126 20:14:00.903039   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:00.903097   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.907398   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.911095   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:00.911169   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:00.937717   59960 cri.go:89] found id: ""
	I1126 20:14:00.937741   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.937750   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:00.937757   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:00.937815   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:00.964659   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:00.964683   59960 cri.go:89] found id: ""
	I1126 20:14:00.964692   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:00.964761   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.969052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:00.969128   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:00.996896   59960 cri.go:89] found id: ""
	I1126 20:14:00.996921   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.996930   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:00.996940   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:00.996968   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:01.052982   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:01.053013   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:01.164358   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:01.164396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:01.245847   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:01.237260   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.238200   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.239244   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.240970   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.241435   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:01.237260   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.238200   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.239244   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.240970   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.241435   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:01.245874   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:01.245888   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:01.278036   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:01.278066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:01.321761   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:01.321798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:01.349850   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:01.349877   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:01.362087   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:01.362115   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:01.406110   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:01.406143   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:01.488538   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:01.488580   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:01.524108   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:01.524314   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:04.107171   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:04.119134   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:04.119206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:04.150892   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:04.150913   59960 cri.go:89] found id: ""
	I1126 20:14:04.150920   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:04.150993   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.154614   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:04.154713   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:04.181842   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:04.181866   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:04.181870   59960 cri.go:89] found id: ""
	I1126 20:14:04.181878   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:04.181958   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.185706   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.189884   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:04.190033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:04.217117   59960 cri.go:89] found id: ""
	I1126 20:14:04.217143   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.217152   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:04.217159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:04.217218   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:04.244873   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:04.244893   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:04.244897   59960 cri.go:89] found id: ""
	I1126 20:14:04.244904   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:04.244962   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.248633   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.252113   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:04.252223   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:04.281381   59960 cri.go:89] found id: ""
	I1126 20:14:04.281410   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.281420   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:04.281426   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:04.281484   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:04.309793   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:04.309817   59960 cri.go:89] found id: ""
	I1126 20:14:04.309825   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:04.309881   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.313555   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:04.313625   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:04.341073   59960 cri.go:89] found id: ""
	I1126 20:14:04.341100   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.341109   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:04.341117   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:04.341129   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:04.436704   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:04.436741   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:04.511848   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:04.500099   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.500700   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506376   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506925   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.508357   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:04.500099   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.500700   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506376   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506925   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.508357   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:04.511872   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:04.511887   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:04.572587   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:04.572662   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:04.622150   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:04.622182   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:04.648129   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:04.648200   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:04.736436   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:04.736472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:04.748750   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:04.748783   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:04.784731   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:04.784756   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:04.861032   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:04.861067   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:04.888273   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:04.888306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:07.422077   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:07.432698   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:07.432776   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:07.463525   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:07.463545   59960 cri.go:89] found id: ""
	I1126 20:14:07.463553   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:07.463605   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.467175   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:07.467243   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:07.497801   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:07.497821   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:07.497826   59960 cri.go:89] found id: ""
	I1126 20:14:07.497833   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:07.497888   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.501759   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.505120   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:07.505198   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:07.539084   59960 cri.go:89] found id: ""
	I1126 20:14:07.539112   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.539121   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:07.539127   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:07.539189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:07.567688   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:07.567713   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:07.567720   59960 cri.go:89] found id: ""
	I1126 20:14:07.567727   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:07.567788   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.571445   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.575895   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:07.575973   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:07.603679   59960 cri.go:89] found id: ""
	I1126 20:14:07.603704   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.603713   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:07.603720   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:07.603801   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:07.633845   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:07.633869   59960 cri.go:89] found id: ""
	I1126 20:14:07.633877   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:07.633982   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.638439   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:07.638510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:07.669305   59960 cri.go:89] found id: ""
	I1126 20:14:07.669329   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.669338   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:07.669348   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:07.669361   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:07.746001   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:07.746039   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:07.773829   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:07.773859   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:07.806673   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:07.806705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:07.847992   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:07.848029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:07.876479   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:07.876507   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:07.952982   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:07.953018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:08.054195   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:08.054235   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:08.071790   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:08.071819   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:08.158168   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:08.148798   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.150262   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.151831   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.152401   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.154098   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:08.148798   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.150262   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.151831   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.152401   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.154098   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:08.158237   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:08.158266   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:08.185227   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:08.185257   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:10.730401   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:10.741460   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:10.741529   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:10.774241   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:10.774263   59960 cri.go:89] found id: ""
	I1126 20:14:10.774270   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:10.774327   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.778033   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:10.778103   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:10.806991   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:10.807015   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:10.807021   59960 cri.go:89] found id: ""
	I1126 20:14:10.807028   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:10.807083   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.810846   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.814441   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:10.814513   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:10.843200   59960 cri.go:89] found id: ""
	I1126 20:14:10.843226   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.843236   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:10.843242   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:10.843301   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:10.871039   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:10.871062   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:10.871068   59960 cri.go:89] found id: ""
	I1126 20:14:10.871075   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:10.871129   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.874747   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.878577   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:10.878661   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:10.907317   59960 cri.go:89] found id: ""
	I1126 20:14:10.907343   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.907352   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:10.907359   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:10.907414   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:10.936274   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:10.936297   59960 cri.go:89] found id: ""
	I1126 20:14:10.936306   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:10.936385   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.939976   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:10.940048   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:10.969776   59960 cri.go:89] found id: ""
	I1126 20:14:10.969848   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.969884   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:10.969911   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:10.969997   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:11.067923   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:11.067964   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:11.082749   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:11.082781   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:11.124244   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:11.124281   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:11.173196   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:11.173232   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:11.200233   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:11.200268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:11.284292   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:11.284327   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:11.317517   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:11.317545   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:11.395020   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:11.386165   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.387087   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388651   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388979   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.390832   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:11.386165   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.387087   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388651   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388979   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.390832   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:11.395043   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:11.395056   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:11.422025   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:11.422059   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:11.500554   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:11.500588   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.028990   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:14.043196   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:14.043275   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:14.078393   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:14.078418   59960 cri.go:89] found id: ""
	I1126 20:14:14.078426   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:14.078485   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.082581   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:14.082679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:14.113586   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:14.113611   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:14.113616   59960 cri.go:89] found id: ""
	I1126 20:14:14.113623   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:14.113677   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.117367   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.120847   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:14.120921   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:14.147191   59960 cri.go:89] found id: ""
	I1126 20:14:14.147214   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.147222   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:14.147229   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:14.147287   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:14.173461   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:14.173483   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.173489   59960 cri.go:89] found id: ""
	I1126 20:14:14.173496   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:14.173560   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.177359   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.180846   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:14.180926   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:14.211699   59960 cri.go:89] found id: ""
	I1126 20:14:14.211731   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.211740   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:14.211747   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:14.211815   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:14.245320   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:14.245343   59960 cri.go:89] found id: ""
	I1126 20:14:14.245352   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:14.245422   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.249066   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:14.249133   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:14.277385   59960 cri.go:89] found id: ""
	I1126 20:14:14.277407   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.277415   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:14.277424   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:14.277436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:14.289839   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:14.289866   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:14.361142   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:14.352896   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.353542   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355081   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355655   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.357173   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:14.352896   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.353542   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355081   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355655   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.357173   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:14.361165   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:14.361179   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:14.419666   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:14.419762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:14.468633   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:14.468667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:14.557664   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:14.557696   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:14.583538   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:14.583567   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.612806   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:14.612834   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:14.638272   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:14.638300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:14.721230   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:14.721268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:14.755109   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:14.755142   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:17.358125   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:17.371898   59960 out.go:203] 
	W1126 20:14:17.375212   59960 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1126 20:14:17.375248   59960 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1126 20:14:17.375258   59960 out.go:285] * Related issues:
	W1126 20:14:17.375279   59960 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1126 20:14:17.375299   59960 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1126 20:14:17.378409   59960 out.go:203] 
	
	
	==> CRI-O <==
	Nov 26 20:07:27 ha-278127 crio[667]: time="2025-11-26T20:07:27.974719211Z" level=info msg="Started container" PID=1450 containerID=0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee description=kube-system/kube-controller-manager-ha-278127/kube-controller-manager id=87dec93c-7b21-4bf6-943c-261f225c113f name=/runtime.v1.RuntimeService/StartContainer sandboxID=aaf24b4012ae22573565b29a9c87fa6c77cadf206a779d5e6c1de76d289f128f
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.929319714Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ec2c398f-23e5-463c-bbb1-09030f312307 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.930440903Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8fc66d00-8c37-4d25-84c6-7d7ac1c54ce3 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.932121756Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5c15308b-e98f-4109-8cbc-9192ac697f01 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.932226698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.940571173Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.940960238Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f34edad928de60e13d64480bf036aa1cf6b11ecfb7c751ef02ef81267e506bc/merged/etc/passwd: no such file or directory"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.941066542Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f34edad928de60e13d64480bf036aa1cf6b11ecfb7c751ef02ef81267e506bc/merged/etc/group: no such file or directory"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.941381721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.959928416Z" level=info msg="Created container 1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5: kube-system/storage-provisioner/storage-provisioner" id=5c15308b-e98f-4109-8cbc-9192ac697f01 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.960936581Z" level=info msg="Starting container: 1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5" id=51eb399f-be44-48a0-a1b4-1c62267c418c name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.967526563Z" level=info msg="Started container" PID=1462 containerID=1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5 description=kube-system/storage-provisioner/storage-provisioner id=51eb399f-be44-48a0-a1b4-1c62267c418c name=/runtime.v1.RuntimeService/StartContainer sandboxID=21dd814126bdbbb8dab349806b778ddb306dc5100a35c1bd2fe40c8004bcd523
	Nov 26 20:07:44 ha-278127 conmon[1447]: conmon 0e221d151c3ca5256368 <ninfo>: container 1450 exited with status 1
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.240819859Z" level=info msg="Removing container: c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.256615675Z" level=info msg="Error loading conmon cgroup of container c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9: cgroup deleted" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.261280075Z" level=info msg="Removed container c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.929977452Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c9fc5566-53be-4e3a-ad5b-047dfe5df6f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.931894512Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c6b73409-e91d-4450-8804-870ca6e0b63d name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.933188155Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=b5b42e4a-b813-4466-87cd-d441eaaf849b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.933308096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.94134128Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.942037763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.965749324Z" level=info msg="Created container b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=b5b42e4a-b813-4466-87cd-d441eaaf849b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.966758303Z" level=info msg="Starting container: b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca" id=d8573d49-5a20-4657-b169-a7727449cf6d name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.975098568Z" level=info msg="Started container" PID=1498 containerID=b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca description=kube-system/kube-controller-manager-ha-278127/kube-controller-manager id=d8573d49-5a20-4657-b169-a7727449cf6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=aaf24b4012ae22573565b29a9c87fa6c77cadf206a779d5e6c1de76d289f128f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	b3d2b3bea3b9f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   6                   aaf24b4012ae2       kube-controller-manager-ha-278127   kube-system
	1de9ee4cdf652       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   8 minutes ago       Running             storage-provisioner       5                   21dd814126bdb       storage-provisioner                 kube-system
	0e221d151c3ca       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   5                   aaf24b4012ae2       kube-controller-manager-ha-278127   kube-system
	1a9b5dae15334       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   8 minutes ago       Exited              storage-provisioner       4                   21dd814126bdb       storage-provisioner                 kube-system
	1622dad7c067a       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   9 minutes ago       Running             kube-vip                  3                   d4cb99de55854       kube-vip-ha-278127                  kube-system
	822876229de0f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   9 minutes ago       Running             coredns                   2                   dfdbe4360041c       coredns-66bc5c9577-ndh8k            kube-system
	aef907239d286       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   9 minutes ago       Running             busybox                   2                   78d3fb27335b4       busybox-7b57f96db7-vwpd8            default
	787754735cfed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   9 minutes ago       Running             coredns                   2                   89e2c226e09e6       coredns-66bc5c9577-bbpk7            kube-system
	d140d1950675e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 minutes ago       Running             kindnet-cni               2                   b9a376ab09c3c       kindnet-gp24m                       kube-system
	7b45294efb449       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   9 minutes ago       Running             kube-proxy                2                   55fa9dab05c0d       kube-proxy-5fndw                    kube-system
	f5647f1652cc1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   9 minutes ago       Running             kube-apiserver            3                   c932fd4498a66       kube-apiserver-ha-278127            kube-system
	040a854900180       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   9 minutes ago       Running             kube-scheduler            2                   773a6356cec93       kube-scheduler-ha-278127            kube-system
	106da3c0ad4fa       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   9 minutes ago       Exited              kube-vip                  2                   d4cb99de55854       kube-vip-ha-278127                  kube-system
	cdc1651fea8f1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   9 minutes ago       Running             etcd                      2                   11d5891e684b3       etcd-ha-278127                      kube-system
	
	
	==> coredns [787754735cfed2e99ff1e0336a870da9b5e17eaed8d9d79b97dbfa75dd83059c] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45898 - 29384 "HINFO IN 3170256484025904488.3791759156995599050. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014293297s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [822876229de0f6cb25db3449774153712b72a0c129090a61a1aeadc760c6cad4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53615 - 2115 "HINFO IN 6991506871979899616.8642824612935885209. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017055518s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-278127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T19_58_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:58:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:15:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:15:34 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:15:34 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:15:34 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:15:34 +0000   Wed, 26 Nov 2025 19:59:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-278127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                370e19a1-8269-418f-82ce-e7791d2f9cc5
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vwpd8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-bbpk7             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 coredns-66bc5c9577-ndh8k             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-ha-278127                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-gp24m                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-278127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-278127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-5fndw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-278127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-278127                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 9m9s                   kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)      kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           17m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-278127 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           10m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   Starting                 9m20s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m19s (x8 over 9m20s)  kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m19s (x8 over 9m20s)  kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m19s (x8 over 9m20s)  kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m32s                  node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           50s                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	
	
	Name:               ha-278127-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_26T19_58_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:58:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:05:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-278127-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                77d88c20-b1f3-431d-ace6-24a69c640dde
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-72bpv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-278127-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-x82cz                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-278127-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-278127-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-p4455                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-278127-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-278127-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   RegisteredNode           16m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-278127-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             12m                node-controller  Node ha-278127-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           7m33s              node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   NodeNotReady             6m43s              node-controller  Node ha-278127-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           51s                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	
	
	Name:               ha-278127-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_26T20_01_35_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:01:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:05:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-278127-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                4949defc-dfd6-4bc6-9c78-3cb968da2b3e
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hqq6q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kindnet-qbd6w               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-proxy-d4p99            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m (x3 over 14m)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 14m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m (x3 over 14m)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x3 over 14m)  kubelet          Node ha-278127-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   NodeReady                13m                kubelet          Node ha-278127-m04 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-278127-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           7m33s              node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   NodeNotReady             6m43s              node-controller  Node ha-278127-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           51s                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	
	
	Name:               ha-278127-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_26T20_15_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:15:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127-m05
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:15:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:15:48 +0000   Wed, 26 Nov 2025 20:15:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:15:48 +0000   Wed, 26 Nov 2025 20:15:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:15:48 +0000   Wed, 26 Nov 2025 20:15:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:15:48 +0000   Wed, 26 Nov 2025 20:15:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-278127-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                d959912d-c0c4-4be3-93de-9124534b5461
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l9p24                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 etcd-ha-278127-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         45s
	  kube-system                 kindnet-lskzr                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      49s
	  kube-system                 kube-apiserver-ha-278127-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-ha-278127-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-8jv6l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-scheduler-ha-278127-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-vip-ha-278127-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        44s   kube-proxy       
	  Normal  RegisteredNode  48s   node-controller  Node ha-278127-m05 event: Registered Node ha-278127-m05 in Controller
	  Normal  RegisteredNode  46s   node-controller  Node ha-278127-m05 event: Registered Node ha-278127-m05 in Controller
	
	
	==> dmesg <==
	[Nov26 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014220] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507172] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032749] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.773464] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.697672] kauditd_printk_skb: 36 callbacks suppressed
	[Nov26 19:37] overlayfs: idmapped layers are currently not supported
	[  +0.074077] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov26 19:39] hrtimer: interrupt took 16123050 ns
	[Nov26 19:43] overlayfs: idmapped layers are currently not supported
	[Nov26 19:44] overlayfs: idmapped layers are currently not supported
	[Nov26 19:58] overlayfs: idmapped layers are currently not supported
	[ +33.942210] overlayfs: idmapped layers are currently not supported
	[Nov26 19:59] overlayfs: idmapped layers are currently not supported
	[Nov26 20:01] overlayfs: idmapped layers are currently not supported
	[Nov26 20:02] overlayfs: idmapped layers are currently not supported
	[Nov26 20:04] overlayfs: idmapped layers are currently not supported
	[  +3.105496] overlayfs: idmapped layers are currently not supported
	[ +37.228314] overlayfs: idmapped layers are currently not supported
	[Nov26 20:05] overlayfs: idmapped layers are currently not supported
	[Nov26 20:06] overlayfs: idmapped layers are currently not supported
	[  +3.713866] overlayfs: idmapped layers are currently not supported
	[Nov26 20:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cdc1651fea8f10bd665928dcc7bb174b74385eb06e911da9629df17c0d9d29e8] <==
	{"level":"info","ts":"2025-11-26T20:14:53.462630Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:53.488797Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":6762496,"size":"6.8 MB"}
	{"level":"info","ts":"2025-11-26T20:14:53.507286Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b4f1ca082be894dc","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-26T20:14:53.507328Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:53.635810Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":4086,"remote-peer-id":"b4f1ca082be894dc","bytes":6771645,"size":"6.8 MB"}
	{"level":"info","ts":"2025-11-26T20:14:53.777261Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(9033535516480176766 12593026477526642892 13038424532659508444)"}
	{"level":"info","ts":"2025-11-26T20:14:53.777477Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:53.777538Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"b4f1ca082be894dc"}
	{"level":"warn","ts":"2025-11-26T20:14:53.794834Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:14:53.796380Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc","error":"EOF"}
	{"level":"info","ts":"2025-11-26T20:14:54.007036Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:54.049653Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b4f1ca082be894dc","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-11-26T20:14:54.049698Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:54.049710Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:54.077606Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"warn","ts":"2025-11-26T20:14:54.203311Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"b4f1ca082be894dc","error":"failed to write b4f1ca082be894dc on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:35908: write: broken pipe)"}
	{"level":"warn","ts":"2025-11-26T20:14:54.203400Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:54.223621Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b4f1ca082be894dc","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-26T20:14:54.223678Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:54.223691Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:15:02.767987Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-26T20:15:07.580298Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-26T20:15:23.636733Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"b4f1ca082be894dc","bytes":6771645,"size":"6.8 MB","took":"30.201610177s"}
	{"level":"warn","ts":"2025-11-26T20:15:51.913439Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.215465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:370350"}
	{"level":"info","ts":"2025-11-26T20:15:51.913501Z","caller":"traceutil/trace.go:172","msg":"trace[150333726] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:3739; }","duration":"164.293191ms","start":"2025-11-26T20:15:51.749195Z","end":"2025-11-26T20:15:51.913488Z","steps":["trace[150333726] 'range keys from bolt db'  (duration: 162.977764ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:15:52 up 58 min,  0 user,  load average: 1.38, 1.30, 1.29
	Linux ha-278127 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d140d1950675ee8ccd9c84ef7a5a7da1b1e44300cc3e3a958c71e1138816061f] <==
	I1126 20:15:22.226696       1 main.go:324] Node ha-278127-m05 has CIDR [10.244.2.0/24] 
	I1126 20:15:32.226250       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:15:32.226286       1 main.go:301] handling current node
	I1126 20:15:32.226302       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:15:32.226309       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:15:32.226460       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:15:32.226474       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:15:32.226527       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1126 20:15:32.226538       1 main.go:324] Node ha-278127-m05 has CIDR [10.244.2.0/24] 
	I1126 20:15:42.226514       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:15:42.227359       1 main.go:301] handling current node
	I1126 20:15:42.227406       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:15:42.227455       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:15:42.227674       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:15:42.227962       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:15:42.228102       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1126 20:15:42.228120       1 main.go:324] Node ha-278127-m05 has CIDR [10.244.2.0/24] 
	I1126 20:15:52.226325       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:15:52.226369       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:15:52.226548       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:15:52.226558       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:15:52.226639       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1126 20:15:52.226645       1 main.go:324] Node ha-278127-m05 has CIDR [10.244.2.0/24] 
	I1126 20:15:52.226719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:15:52.226727       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f5647f1652cc11a195a49a98906391e791c3136916a5e3c249907585088fad42] <==
	{"level":"warn","ts":"2025-11-26T20:08:15.185150Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019681e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185302Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400264b2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185460Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001969860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185569Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40023790e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185752Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a24960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185791Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002218000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.188111Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400089eb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.188335Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002471680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190353Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400264b2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190396Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f503c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190413Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029423c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190430Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001969860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190463Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a3b860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190481Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002378000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190499Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190513Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190529Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a24960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190727Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400089e000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	W1126 20:08:17.152713       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1126 20:08:17.154506       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:08:17.162706       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:08:19.148616       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:08:22.296241       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:09:09.201336       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:09:09.262823       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee] <==
	I1126 20:07:29.733675       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:07:30.451982       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1126 20:07:30.452014       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:07:30.453426       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1126 20:07:30.453688       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1126 20:07:30.453871       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1126 20:07:30.453945       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1126 20:07:44.473711       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca] <==
	E1126 20:08:59.054603       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054612       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054617       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054623       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	I1126 20:08:59.075009       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mttpp"
	I1126 20:08:59.108301       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mttpp"
	I1126 20:08:59.108397       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-278127-m03"
	I1126 20:08:59.137341       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-278127-m03"
	I1126 20:08:59.137379       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-cjs7r"
	I1126 20:08:59.170242       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-cjs7r"
	I1126 20:08:59.170364       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-278127-m03"
	I1126 20:08:59.200927       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-278127-m03"
	I1126 20:08:59.201053       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-278127-m03"
	I1126 20:08:59.231029       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-278127-m03"
	I1126 20:08:59.231129       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-278127-m03"
	I1126 20:08:59.266325       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-278127-m03"
	I1126 20:08:59.266427       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-278127-m03"
	I1126 20:08:59.307467       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-278127-m03"
	I1126 20:14:09.243470       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-hqq6q"
	I1126 20:14:19.320009       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-72bpv"
	I1126 20:15:03.175366       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-278127-m05\" does not exist"
	I1126 20:15:03.207382       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-278127-m05" podCIDRs=["10.244.2.0/24"]
	I1126 20:15:04.358981       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-278127-m05"
	I1126 20:15:04.359270       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1126 20:15:49.366706       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [7b45294efb44968b6b5d7d6994b3f6f118094d33ccfb9aa9a125e9d6110f41b3] <==
	I1126 20:07:27.549779       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	I1126 20:07:27.549805       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	I1126 20:07:27.549666       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	E1126 20:07:31.630334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:31.630336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:31.630470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:31.630581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:34.702391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:34.702403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:34.702509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:34.702664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:41.518262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:41.518267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:41.518397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:41.518465       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:07:41.518496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:52.462253       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:07:52.462312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:52.462400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:55.534388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:55.534401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:08:05.710253       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:08:08.782267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:08:11.854307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:08:14.930219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [040a8549001808f2d3fce3d4cf9f8dff272706173960c5e8004af8b1ea042e80] <==
	I1126 20:06:34.800738       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:06:39.572983       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:06:39.573028       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:06:39.573039       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:06:39.573046       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:06:39.693522       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:06:39.693624       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:06:39.703802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:06:39.704071       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:06:39.715887       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:06:39.704092       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:06:39.816440       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1126 20:15:48.283319       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-l9p24\": pod busybox-7b57f96db7-l9p24 is already assigned to node \"ha-278127-m05\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-l9p24" node="ha-278127-m05"
	E1126 20:15:48.288301       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 1cbde006-b1ea-451e-ba5b-380c98a2782c(default/busybox-7b57f96db7-l9p24) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-l9p24"
	E1126 20:15:48.288437       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-l9p24\": pod busybox-7b57f96db7-l9p24 is already assigned to node \"ha-278127-m05\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-l9p24"
	I1126 20:15:48.290719       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-l9p24" node="ha-278127-m05"
	
	
	==> kubelet <==
	Nov 26 20:07:21 ha-278127 kubelet[805]: E1126 20:07:21.263300     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:23 ha-278127 kubelet[805]: E1126 20:07:23.240740     805 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-278127.187ba7448d330dec  default   2559 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-278127,UID:ha-278127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-278127 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-278127,},FirstTimestamp:2025-11-26 20:06:31 +0000 UTC,LastTimestamp:2025-11-26 20:06:32.032348366 +0000 UTC m=+0.308576049,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-278127,}"
	Nov 26 20:07:27 ha-278127 kubelet[805]: I1126 20:07:27.929241     805 scope.go:117] "RemoveContainer" containerID="c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9"
	Nov 26 20:07:28 ha-278127 kubelet[805]: I1126 20:07:28.928664     805 scope.go:117] "RemoveContainer" containerID="1a9b5dae1533404a7bf684e278d137906a4f310cb5682e61046be41540e6f32b"
	Nov 26 20:07:31 ha-278127 kubelet[805]: E1126 20:07:31.162433     805 controller.go:195] "Failed to update lease" err="Put \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:31 ha-278127 kubelet[805]: E1126 20:07:31.265440     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes ha-278127)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.163428     805 controller.go:195] "Failed to update lease" err="Put \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: I1126 20:07:41.163974     805 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.266735     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.266930     805 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
	Nov 26 20:07:45 ha-278127 kubelet[805]: I1126 20:07:45.237637     805 scope.go:117] "RemoveContainer" containerID="c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9"
	Nov 26 20:07:45 ha-278127 kubelet[805]: I1126 20:07:45.238084     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:07:45 ha-278127 kubelet[805]: E1126 20:07:45.238254     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:07:47 ha-278127 kubelet[805]: I1126 20:07:47.402612     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:07:47 ha-278127 kubelet[805]: E1126 20:07:47.402814     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:07:49 ha-278127 kubelet[805]: E1126 20:07:49.241093     805 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kindnet-gp24m)" podUID="4d3597e4-de22-4f29-8c58-1aaabd4a8a56" pod="kube-system/kindnet-gp24m"
	Nov 26 20:07:51 ha-278127 kubelet[805]: E1126 20:07:51.165080     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
	Nov 26 20:07:57 ha-278127 kubelet[805]: E1126 20:07:57.243812     805 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-278127.187ba7448d32cbe5  default   2561 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-278127,UID:ha-278127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-278127 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-278127,},FirstTimestamp:2025-11-26 20:06:31 +0000 UTC,LastTimestamp:2025-11-26 20:06:32.033252015 +0000 UTC m=+0.309479698,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-278127,}"
	Nov 26 20:08:00 ha-278127 kubelet[805]: I1126 20:08:00.928844     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:08:00 ha-278127 kubelet[805]: E1126 20:08:00.929077     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:08:01 ha-278127 kubelet[805]: E1126 20:08:01.366584     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	Nov 26 20:08:01 ha-278127 kubelet[805]: E1126 20:08:01.649883     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"ha-278127\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-278127/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:08:11 ha-278127 kubelet[805]: E1126 20:08:11.650209     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:08:11 ha-278127 kubelet[805]: E1126 20:08:11.768381     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": context deadline exceeded" interval="800ms"
	Nov 26 20:08:12 ha-278127 kubelet[805]: I1126 20:08:12.929036     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-278127 -n ha-278127
helpers_test.go:269: (dbg) Run:  kubectl --context ha-278127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-rcsd2
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-278127 describe pod busybox-7b57f96db7-rcsd2
helpers_test.go:290: (dbg) kubectl --context ha-278127 describe pod busybox-7b57f96db7-rcsd2:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-rcsd2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zn4mp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-zn4mp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  95s                default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  52s (x2 over 52s)  default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  51s                default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  50s                default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  6s                 default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  52s (x4 over 56s)  default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  51s                default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  50s (x2 over 51s)  default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  6s                 default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (85.40s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (6.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:309: expected profile "ha-278127" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-278127\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-278127\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-278127\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-278127
helpers_test.go:243: (dbg) docker inspect ha-278127:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd",
	        "Created": "2025-11-26T19:57:51.94382214Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 60086,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:06:25.13540784Z",
	            "FinishedAt": "2025-11-26T20:06:24.397214575Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/hosts",
	        "LogPath": "/var/lib/docker/containers/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd-json.log",
	        "Name": "/ha-278127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-278127:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-278127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd",
	                "LowerDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c12c2db9558baed8876313cf29ed50ad876225d492f5b6886eb14184b0d78501/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-278127",
	                "Source": "/var/lib/docker/volumes/ha-278127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-278127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-278127",
	                "name.minikube.sigs.k8s.io": "ha-278127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb3aaf333e9f66a1f0a54705c2952cf94a31e67f170d0e073ad505006b4613f7",
	            "SandboxKey": "/var/run/docker/netns/cb3aaf333e9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-278127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:6e:15:9f:21:8c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "20cb65a83ad57cf8581cf982a5b25f381be527698b87a783139e32a436f750e9",
	                    "EndpointID": "217fa13f4a876f9a733e9c88a45d94a8aabe2f981d6e4c092ca2c647767455d3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-278127",
	                        "0081e5a17ed5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-278127 -n ha-278127
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 logs -n 25: (2.327570305s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-278127 ssh -n ha-278127-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test_ha-278127-m03_ha-278127-m04.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp testdata/cp-test.txt ha-278127-m04:/home/docker/cp-test.txt                                                             │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837002730/001/cp-test_ha-278127-m04.txt │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127:/home/docker/cp-test_ha-278127-m04_ha-278127.txt                       │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127.txt                                                 │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127-m02:/home/docker/cp-test_ha-278127-m04_ha-278127-m02.txt               │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m02 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127-m02.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ cp      │ ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127-m03:/home/docker/cp-test_ha-278127-m04_ha-278127-m03.txt               │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ ssh     │ ha-278127 ssh -n ha-278127-m03 sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127-m03.txt                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ node    │ ha-278127 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:02 UTC │
	│ node    │ ha-278127 node start m02 --alsologtostderr -v 5                                                                                      │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:02 UTC │ 26 Nov 25 20:03 UTC │
	│ node    │ ha-278127 node list --alsologtostderr -v 5                                                                                           │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:03 UTC │                     │
	│ stop    │ ha-278127 stop --alsologtostderr -v 5                                                                                                │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:03 UTC │ 26 Nov 25 20:04 UTC │
	│ start   │ ha-278127 start --wait true --alsologtostderr -v 5                                                                                   │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:04 UTC │ 26 Nov 25 20:05 UTC │
	│ node    │ ha-278127 node list --alsologtostderr -v 5                                                                                           │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │                     │
	│ node    │ ha-278127 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │ 26 Nov 25 20:05 UTC │
	│ stop    │ ha-278127 stop --alsologtostderr -v 5                                                                                                │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:05 UTC │ 26 Nov 25 20:06 UTC │
	│ start   │ ha-278127 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:06 UTC │                     │
	│ node    │ ha-278127 node add --control-plane --alsologtostderr -v 5                                                                            │ ha-278127 │ jenkins │ v1.37.0 │ 26 Nov 25 20:14 UTC │ 26 Nov 25 20:15 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:06:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:06:24.854734   59960 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:06:24.854900   59960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.854911   59960 out.go:374] Setting ErrFile to fd 2...
	I1126 20:06:24.854917   59960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.855178   59960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:06:24.855529   59960 out.go:368] Setting JSON to false
	I1126 20:06:24.856339   59960 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2915,"bootTime":1764184670,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:06:24.856415   59960 start.go:143] virtualization:  
	I1126 20:06:24.859567   59960 out.go:179] * [ha-278127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:06:24.863328   59960 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:06:24.863432   59960 notify.go:221] Checking for updates...
	I1126 20:06:24.869239   59960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:06:24.872146   59960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:24.874915   59960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:06:24.877742   59960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:06:24.880612   59960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:06:24.883943   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:24.884479   59960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:06:24.917824   59960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:06:24.917967   59960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:06:24.982581   59960 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-26 20:06:24.973603153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:06:24.982686   59960 docker.go:319] overlay module found
	I1126 20:06:24.986072   59960 out.go:179] * Using the docker driver based on existing profile
	I1126 20:06:24.989065   59960 start.go:309] selected driver: docker
	I1126 20:06:24.989102   59960 start.go:927] validating driver "docker" against &{Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:24.989232   59960 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:06:24.989341   59960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:06:25.048426   59960 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-26 20:06:25.038525674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:06:25.048890   59960 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:06:25.048924   59960 cni.go:84] Creating CNI manager for ""
	I1126 20:06:25.048991   59960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1126 20:06:25.049039   59960 start.go:353] cluster config:
	{Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-s
erver:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:25.052236   59960 out.go:179] * Starting "ha-278127" primary control-plane node in "ha-278127" cluster
	I1126 20:06:25.055057   59960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:06:25.058039   59960 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:06:25.061008   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:25.061089   59960 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:06:25.061106   59960 cache.go:65] Caching tarball of preloaded images
	I1126 20:06:25.061005   59960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:06:25.061198   59960 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:06:25.061210   59960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:06:25.061353   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:25.080808   59960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:06:25.080831   59960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:06:25.080846   59960 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:06:25.080876   59960 start.go:360] acquireMachinesLock for ha-278127: {Name:mkb106a4eb425a1b9d0e59976741b3f940666d17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:06:25.080933   59960 start.go:364] duration metric: took 35.659µs to acquireMachinesLock for "ha-278127"
	I1126 20:06:25.080951   59960 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:06:25.080956   59960 fix.go:54] fixHost starting: 
	I1126 20:06:25.081217   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:06:25.097737   59960 fix.go:112] recreateIfNeeded on ha-278127: state=Stopped err=<nil>
	W1126 20:06:25.097772   59960 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:06:25.101061   59960 out.go:252] * Restarting existing docker container for "ha-278127" ...
	I1126 20:06:25.101155   59960 cli_runner.go:164] Run: docker start ha-278127
	I1126 20:06:25.385420   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:06:25.411970   59960 kic.go:430] container "ha-278127" state is running.
	I1126 20:06:25.412392   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:25.431941   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:25.432192   59960 machine.go:94] provisionDockerMachine start ...
	I1126 20:06:25.432251   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:25.452939   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:25.453252   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:25.453261   59960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:06:25.454097   59960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44664->127.0.0.1:32828: read: connection reset by peer
	I1126 20:06:28.605461   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127
	
	I1126 20:06:28.605490   59960 ubuntu.go:182] provisioning hostname "ha-278127"
	I1126 20:06:28.605558   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:28.623455   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:28.623769   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:28.623786   59960 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-278127 && echo "ha-278127" | sudo tee /etc/hostname
	I1126 20:06:28.778155   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127
	
	I1126 20:06:28.778256   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:28.794949   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:28.795250   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:28.795271   59960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-278127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-278127/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-278127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:06:28.942212   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:06:28.942238   59960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:06:28.942272   59960 ubuntu.go:190] setting up certificates
	I1126 20:06:28.942281   59960 provision.go:84] configureAuth start
	I1126 20:06:28.942355   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:28.960559   59960 provision.go:143] copyHostCerts
	I1126 20:06:28.960617   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:28.960653   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:06:28.960666   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:28.960744   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:06:28.960844   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:28.960866   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:06:28.960877   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:28.960906   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:06:28.960964   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:28.960985   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:06:28.960993   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:28.961023   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:06:28.961088   59960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.ha-278127 san=[127.0.0.1 192.168.49.2 ha-278127 localhost minikube]
	I1126 20:06:29.153972   59960 provision.go:177] copyRemoteCerts
	I1126 20:06:29.154049   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:06:29.154092   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.171236   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.273352   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1126 20:06:29.273420   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:06:29.290237   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1126 20:06:29.290299   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1126 20:06:29.307794   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1126 20:06:29.307855   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:06:29.325356   59960 provision.go:87] duration metric: took 383.045342ms to configureAuth
	I1126 20:06:29.325387   59960 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:06:29.325626   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:29.325742   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.342790   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:29.343103   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1126 20:06:29.343131   59960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:06:29.721722   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:06:29.721744   59960 machine.go:97] duration metric: took 4.28954331s to provisionDockerMachine
	I1126 20:06:29.721770   59960 start.go:293] postStartSetup for "ha-278127" (driver="docker")
	I1126 20:06:29.721791   59960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:06:29.721855   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:06:29.721907   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.742288   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.845365   59960 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:06:29.848307   59960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:06:29.848344   59960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:06:29.848355   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:06:29.848405   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:06:29.848509   59960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:06:29.848521   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /etc/ssl/certs/41292.pem
	I1126 20:06:29.848614   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:06:29.855777   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:29.872505   59960 start.go:296] duration metric: took 150.71913ms for postStartSetup
	I1126 20:06:29.872582   59960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:06:29.872629   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:29.889019   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:29.990934   59960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:06:29.995268   59960 fix.go:56] duration metric: took 4.914304894s for fixHost
	I1126 20:06:29.995338   59960 start.go:83] releasing machines lock for "ha-278127", held for 4.914396494s
	I1126 20:06:29.995443   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:06:30.012377   59960 ssh_runner.go:195] Run: cat /version.json
	I1126 20:06:30.012396   59960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:06:30.012433   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:30.012448   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:06:30.031079   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:30.032530   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:06:30.145909   59960 ssh_runner.go:195] Run: systemctl --version
	I1126 20:06:30.239511   59960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:06:30.276317   59960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:06:30.280821   59960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:06:30.280919   59960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:06:30.288826   59960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:06:30.288852   59960 start.go:496] detecting cgroup driver to use...
	I1126 20:06:30.288908   59960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:06:30.288973   59960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:06:30.304277   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:06:30.316900   59960 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:06:30.316968   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:06:30.332722   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:06:30.345857   59960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:06:30.458910   59960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:06:30.568914   59960 docker.go:234] disabling docker service ...
	I1126 20:06:30.568992   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:06:30.584111   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:06:30.596826   59960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:06:30.712581   59960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:06:30.831709   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:06:30.843921   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:06:30.857895   59960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:06:30.858007   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.867693   59960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:06:30.867809   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.876639   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.885174   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.893801   59960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:06:30.901606   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.910405   59960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.918408   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:30.927292   59960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:06:30.934726   59960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:06:30.941996   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:31.058637   59960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:06:31.242820   59960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:06:31.242889   59960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:06:31.246945   59960 start.go:564] Will wait 60s for crictl version
	I1126 20:06:31.247023   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:06:31.250523   59960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:06:31.274233   59960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:06:31.274317   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:06:31.302783   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:06:31.335292   59960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:06:31.338152   59960 cli_runner.go:164] Run: docker network inspect ha-278127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:06:31.354467   59960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 20:06:31.358251   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:06:31.368693   59960 kubeadm.go:884] updating cluster {Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:06:31.368839   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:31.368891   59960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:06:31.403727   59960 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:06:31.403752   59960 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:06:31.404010   59960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:06:31.431423   59960 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:06:31.431446   59960 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:06:31.431457   59960 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1126 20:06:31.431560   59960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-278127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:06:31.431642   59960 ssh_runner.go:195] Run: crio config
	I1126 20:06:31.500147   59960 cni.go:84] Creating CNI manager for ""
	I1126 20:06:31.500186   59960 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1126 20:06:31.500211   59960 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:06:31.500236   59960 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-278127 NodeName:ha-278127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:06:31.500354   59960 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-278127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:06:31.500372   59960 kube-vip.go:115] generating kube-vip config ...
	I1126 20:06:31.500428   59960 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1126 20:06:31.512046   59960 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:06:31.512210   59960 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1126 20:06:31.512299   59960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:06:31.519877   59960 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:06:31.519973   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1126 20:06:31.527497   59960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1126 20:06:31.540828   59960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:06:31.553623   59960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1126 20:06:31.566105   59960 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1126 20:06:31.578838   59960 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1126 20:06:31.582461   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:06:31.592186   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:31.707439   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:06:31.722268   59960 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127 for IP: 192.168.49.2
	I1126 20:06:31.722291   59960 certs.go:195] generating shared ca certs ...
	I1126 20:06:31.722307   59960 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:31.722445   59960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:06:31.722497   59960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:06:31.722508   59960 certs.go:257] generating profile certs ...
	I1126 20:06:31.722593   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key
	I1126 20:06:31.722624   59960 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab
	I1126 20:06:31.722643   59960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1126 20:06:32.010576   59960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab ...
	I1126 20:06:32.010610   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab: {Name:mk952cf244227c47330a0f303648b46942398499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.010819   59960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab ...
	I1126 20:06:32.010835   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab: {Name:mk44577b028f8c1bee471863ff089cc458df619d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.010930   59960 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt.628cddab -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt
	I1126 20:06:32.011078   59960 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.628cddab -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key
	I1126 20:06:32.011225   59960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key
	I1126 20:06:32.011244   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1126 20:06:32.011263   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1126 20:06:32.011280   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1126 20:06:32.011297   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1126 20:06:32.011315   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1126 20:06:32.011331   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1126 20:06:32.011348   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1126 20:06:32.011362   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1126 20:06:32.011414   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:06:32.011456   59960 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:06:32.011469   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:06:32.011501   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:06:32.011530   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:06:32.011558   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:06:32.011608   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:32.011640   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.011656   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.011666   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem -> /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.012331   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:06:32.032881   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:06:32.054562   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:06:32.072828   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:06:32.091195   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:06:32.109160   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:06:32.126721   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:06:32.143729   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:06:32.162210   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:06:32.179022   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:06:32.196402   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:06:32.213770   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:06:32.227414   59960 ssh_runner.go:195] Run: openssl version
	I1126 20:06:32.233654   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:06:32.243718   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.247376   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.247448   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:06:32.289532   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:06:32.297668   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:06:32.306080   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.309793   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.309880   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:06:32.353652   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:06:32.364544   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:06:32.373430   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.381651   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.381803   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:06:32.434961   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:06:32.448704   59960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:06:32.454552   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:06:32.518905   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:06:32.599420   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:06:32.673604   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:06:32.734602   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:06:32.794948   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:06:32.842245   59960 kubeadm.go:401] StartCluster: {Name:ha-278127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:06:32.842417   59960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:06:32.842512   59960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:06:32.887488   59960 cri.go:89] found id: "f5647f1652cc11a195a49a98906391e791c3136916a5e3c249907585088fad42"
	I1126 20:06:32.887548   59960 cri.go:89] found id: "1ed2c42e7047cc402ab04fdadafa16acc5208b12eede0475826c97d34c9a071f"
	I1126 20:06:32.887577   59960 cri.go:89] found id: "040a8549001808f2d3fce3d4cf9f8dff272706173960c5e8004af8b1ea042e80"
	I1126 20:06:32.887595   59960 cri.go:89] found id: "106da3c0ad4fa03ae491f571375cda1a123fe52e6f7ef39170a84c273267c713"
	I1126 20:06:32.887614   59960 cri.go:89] found id: "cdc1651fea8f10bd665928dcc7bb174b74385eb06e911da9629df17c0d9d29e8"
	I1126 20:06:32.887650   59960 cri.go:89] found id: ""
	I1126 20:06:32.887728   59960 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:06:32.910884   59960 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:06:32Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:06:32.911021   59960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:06:32.933474   59960 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:06:32.933554   59960 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:06:32.933631   59960 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:06:32.956246   59960 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:06:32.956760   59960 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-278127" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:32.956919   59960 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-2326/kubeconfig needs updating (will repair): [kubeconfig missing "ha-278127" cluster setting kubeconfig missing "ha-278127" context setting]
	I1126 20:06:32.957299   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.957946   59960 kapi.go:59] client config for ha-278127: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:06:32.958772   59960 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1126 20:06:32.958857   59960 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1126 20:06:32.958878   59960 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1126 20:06:32.958921   59960 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1126 20:06:32.958940   59960 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1126 20:06:32.958837   59960 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1126 20:06:32.959354   59960 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:06:32.974056   59960 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1126 20:06:32.974125   59960 kubeadm.go:602] duration metric: took 40.551528ms to restartPrimaryControlPlane
	I1126 20:06:32.974150   59960 kubeadm.go:403] duration metric: took 131.91251ms to StartCluster
	I1126 20:06:32.974180   59960 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.974282   59960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:06:32.974978   59960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:06:32.975243   59960 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:06:32.975297   59960 start.go:242] waiting for startup goroutines ...
	I1126 20:06:32.975325   59960 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:06:32.975918   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:32.981231   59960 out.go:179] * Enabled addons: 
	I1126 20:06:32.984100   59960 addons.go:530] duration metric: took 8.777007ms for enable addons: enabled=[]
	I1126 20:06:32.984180   59960 start.go:247] waiting for cluster config update ...
	I1126 20:06:32.984203   59960 start.go:256] writing updated cluster config ...
	I1126 20:06:32.987492   59960 out.go:203] 
	I1126 20:06:32.990613   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:32.990800   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:32.994017   59960 out.go:179] * Starting "ha-278127-m02" control-plane node in "ha-278127" cluster
	I1126 20:06:32.996802   59960 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:06:32.999792   59960 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:06:33.002700   59960 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:06:33.002740   59960 cache.go:65] Caching tarball of preloaded images
	I1126 20:06:33.002860   59960 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:06:33.002893   59960 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:06:33.003031   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:33.003254   59960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:06:33.039303   59960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:06:33.039323   59960 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:06:33.039336   59960 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:06:33.039360   59960 start.go:360] acquireMachinesLock for ha-278127-m02: {Name:mkfa715e07e067116cf6c4854164186af5a39436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:06:33.039417   59960 start.go:364] duration metric: took 41.518µs to acquireMachinesLock for "ha-278127-m02"
	I1126 20:06:33.039439   59960 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:06:33.039445   59960 fix.go:54] fixHost starting: m02
	I1126 20:06:33.039721   59960 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:06:33.071417   59960 fix.go:112] recreateIfNeeded on ha-278127-m02: state=Stopped err=<nil>
	W1126 20:06:33.071449   59960 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:06:33.074580   59960 out.go:252] * Restarting existing docker container for "ha-278127-m02" ...
	I1126 20:06:33.074664   59960 cli_runner.go:164] Run: docker start ha-278127-m02
	I1126 20:06:33.452368   59960 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:06:33.483474   59960 kic.go:430] container "ha-278127-m02" state is running.
	I1126 20:06:33.483869   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:33.512602   59960 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/config.json ...
	I1126 20:06:33.512851   59960 machine.go:94] provisionDockerMachine start ...
	I1126 20:06:33.512917   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:33.539611   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:33.539907   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:33.539915   59960 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:06:33.540557   59960 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35216->127.0.0.1:32833: read: connection reset by peer
	I1126 20:06:36.755151   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127-m02
	
	I1126 20:06:36.755173   59960 ubuntu.go:182] provisioning hostname "ha-278127-m02"
	I1126 20:06:36.755238   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:36.783610   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:36.783923   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:36.783950   59960 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-278127-m02 && echo "ha-278127-m02" | sudo tee /etc/hostname
	I1126 20:06:37.026368   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-278127-m02
	
	I1126 20:06:37.026488   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:37.056257   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:37.056574   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:37.056592   59960 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-278127-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-278127-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-278127-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:06:37.278605   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:06:37.278692   59960 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:06:37.278724   59960 ubuntu.go:190] setting up certificates
	I1126 20:06:37.278764   59960 provision.go:84] configureAuth start
	I1126 20:06:37.278849   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:37.306165   59960 provision.go:143] copyHostCerts
	I1126 20:06:37.306207   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:37.306246   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:06:37.306253   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:06:37.306332   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:06:37.306421   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:37.306441   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:06:37.306445   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:06:37.306474   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:06:37.306512   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:37.306528   59960 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:06:37.306532   59960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:06:37.306553   59960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:06:37.306602   59960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.ha-278127-m02 san=[127.0.0.1 192.168.49.3 ha-278127-m02 localhost minikube]
	I1126 20:06:37.781886   59960 provision.go:177] copyRemoteCerts
	I1126 20:06:37.782050   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:06:37.782113   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:37.799978   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:37.920744   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1126 20:06:37.920800   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:06:37.946353   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1126 20:06:37.946424   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 20:06:37.990628   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1126 20:06:37.990734   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:06:38.022932   59960 provision.go:87] duration metric: took 744.14174ms to configureAuth
	I1126 20:06:38.022999   59960 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:06:38.023281   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:38.023419   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:38.055902   59960 main.go:143] libmachine: Using SSH client type: native
	I1126 20:06:38.056219   59960 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1126 20:06:38.056232   59960 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:06:39.163004   59960 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:06:39.163066   59960 machine.go:97] duration metric: took 5.650194842s to provisionDockerMachine
	I1126 20:06:39.163087   59960 start.go:293] postStartSetup for "ha-278127-m02" (driver="docker")
	I1126 20:06:39.163098   59960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:06:39.163204   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:06:39.163258   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.194111   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.327619   59960 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:06:39.331483   59960 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:06:39.331507   59960 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:06:39.331518   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:06:39.331574   59960 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:06:39.331649   59960 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:06:39.331655   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /etc/ssl/certs/41292.pem
	I1126 20:06:39.331756   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:06:39.344886   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:06:39.377797   59960 start.go:296] duration metric: took 214.695598ms for postStartSetup
	I1126 20:06:39.377880   59960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:06:39.377991   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.402878   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.525023   59960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:06:39.531527   59960 fix.go:56] duration metric: took 6.492076268s for fixHost
	I1126 20:06:39.531551   59960 start.go:83] releasing machines lock for "ha-278127-m02", held for 6.492125467s
	I1126 20:06:39.531622   59960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m02
	I1126 20:06:39.571062   59960 out.go:179] * Found network options:
	I1126 20:06:39.574101   59960 out.go:179]   - NO_PROXY=192.168.49.2
	W1126 20:06:39.577135   59960 proxy.go:120] fail to check proxy env: Error ip not in block
	W1126 20:06:39.577189   59960 proxy.go:120] fail to check proxy env: Error ip not in block
	I1126 20:06:39.577283   59960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:06:39.577298   59960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:06:39.577325   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.577353   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m02
	I1126 20:06:39.610149   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.618182   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m02/id_rsa Username:docker}
	I1126 20:06:39.847910   59960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:06:39.986067   59960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:06:39.986218   59960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:06:40.010567   59960 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:06:40.010651   59960 start.go:496] detecting cgroup driver to use...
	I1126 20:06:40.010701   59960 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:06:40.010777   59960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:06:40.066499   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:06:40.113187   59960 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:06:40.113357   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:06:40.138505   59960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:06:40.165558   59960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:06:40.434812   59960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:06:40.667360   59960 docker.go:234] disabling docker service ...
	I1126 20:06:40.667485   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:06:40.689020   59960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:06:40.712251   59960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:06:41.062262   59960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:06:41.446879   59960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:06:41.479018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:06:41.522736   59960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:06:41.522836   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.550554   59960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:06:41.550640   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.568877   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.605965   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.634535   59960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:06:41.647439   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.679616   59960 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.700895   59960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:06:41.724575   59960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:06:41.743621   59960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:06:41.761053   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:06:42.179518   59960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:08:12.654700   59960 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.475140858s)
	I1126 20:08:12.654725   59960 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:08:12.654777   59960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:08:12.658561   59960 start.go:564] Will wait 60s for crictl version
	I1126 20:08:12.658629   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:08:12.662122   59960 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:08:12.694230   59960 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:08:12.694320   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:08:12.723516   59960 ssh_runner.go:195] Run: crio --version
	I1126 20:08:12.752895   59960 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:08:12.755800   59960 out.go:179]   - env NO_PROXY=192.168.49.2
	I1126 20:08:12.758681   59960 cli_runner.go:164] Run: docker network inspect ha-278127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:08:12.774831   59960 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1126 20:08:12.778729   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:08:12.788193   59960 mustload.go:66] Loading cluster: ha-278127
	I1126 20:08:12.788437   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:08:12.788732   59960 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:08:12.805367   59960 host.go:66] Checking if "ha-278127" exists ...
	I1126 20:08:12.805673   59960 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127 for IP: 192.168.49.3
	I1126 20:08:12.805688   59960 certs.go:195] generating shared ca certs ...
	I1126 20:08:12.805703   59960 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:08:12.805829   59960 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:08:12.805875   59960 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:08:12.805885   59960 certs.go:257] generating profile certs ...
	I1126 20:08:12.806061   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key
	I1126 20:08:12.806134   59960 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key.28ad082f
	I1126 20:08:12.806177   59960 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key
	I1126 20:08:12.806189   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1126 20:08:12.806203   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1126 20:08:12.806214   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1126 20:08:12.806227   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1126 20:08:12.806238   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1126 20:08:12.806249   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1126 20:08:12.806265   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1126 20:08:12.806276   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1126 20:08:12.806330   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:08:12.806364   59960 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:08:12.806376   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:08:12.806404   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:08:12.806431   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:08:12.806458   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:08:12.806505   59960 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:08:12.806543   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:12.806557   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem -> /usr/share/ca-certificates/4129.pem
	I1126 20:08:12.806568   59960 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> /usr/share/ca-certificates/41292.pem
	I1126 20:08:12.806631   59960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:08:12.824408   59960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:08:12.926228   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1126 20:08:12.930801   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1126 20:08:12.939401   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1126 20:08:12.947934   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1126 20:08:12.960335   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1126 20:08:12.964526   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1126 20:08:12.973104   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1126 20:08:12.978204   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1126 20:08:12.987576   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1126 20:08:12.991901   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1126 20:08:13.001289   59960 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1126 20:08:13.006200   59960 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1126 20:08:13.014443   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:08:13.039341   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:08:13.063520   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:08:13.085219   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:08:13.103037   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:08:13.123095   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:08:13.140681   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:08:13.160781   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:08:13.180406   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:08:13.200475   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:08:13.221024   59960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:08:13.239900   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1126 20:08:13.254738   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1126 20:08:13.269631   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1126 20:08:13.285317   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1126 20:08:13.300359   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1126 20:08:13.320893   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1126 20:08:13.340300   59960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1126 20:08:13.361527   59960 ssh_runner.go:195] Run: openssl version
	I1126 20:08:13.368555   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:08:13.377244   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.381511   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.381624   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:08:13.427936   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:08:13.437023   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:08:13.445274   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.449571   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.449682   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:08:13.496315   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:08:13.504808   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:08:13.513181   59960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.517313   59960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.517396   59960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:08:13.579337   59960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:08:13.588179   59960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:08:13.593330   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:08:13.645107   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:08:13.691020   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:08:13.735436   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:08:13.780762   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:08:13.830095   59960 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:08:13.873290   59960 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1126 20:08:13.873415   59960 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-278127-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-278127 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:08:13.873445   59960 kube-vip.go:115] generating kube-vip config ...
	I1126 20:08:13.873508   59960 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1126 20:08:13.885513   59960 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:08:13.885577   59960 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1126 20:08:13.885657   59960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:08:13.893550   59960 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:08:13.893628   59960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1126 20:08:13.901912   59960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1126 20:08:13.916015   59960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:08:13.934936   59960 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1126 20:08:13.979363   59960 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1126 20:08:13.991396   59960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:08:14.018397   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:08:14.385132   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:08:14.402828   59960 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:08:14.403147   59960 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:08:14.408967   59960 out.go:179] * Verifying Kubernetes components...
	I1126 20:08:14.411916   59960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:08:14.659853   59960 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:08:14.678979   59960 kapi.go:59] client config for ha-278127: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/ha-278127/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1126 20:08:14.679061   59960 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1126 20:08:14.679322   59960 node_ready.go:35] waiting up to 6m0s for node "ha-278127-m02" to be "Ready" ...
	I1126 20:08:15.269402   59960 node_ready.go:49] node "ha-278127-m02" is "Ready"
	I1126 20:08:15.269438   59960 node_ready.go:38] duration metric: took 590.083677ms for node "ha-278127-m02" to be "Ready" ...
	I1126 20:08:15.269450   59960 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:08:15.269508   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:15.770378   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:16.271005   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:16.769624   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:17.269646   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:17.770292   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:18.270233   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:18.770225   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:19.269626   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:19.770251   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:20.270592   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:20.769691   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:21.269742   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:21.769575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:22.269640   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:22.770094   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:23.269745   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:23.770093   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:24.269839   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:24.770626   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:25.270510   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:25.770352   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:26.270238   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:26.770199   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:27.270553   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:27.770570   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:28.269631   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:28.770575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:29.269663   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:29.770438   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:30.269733   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:30.769570   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:31.269688   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:31.770556   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:32.270505   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:32.770152   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:33.269716   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:33.769765   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:34.269659   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:34.769641   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:35.269866   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:35.770030   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:36.270158   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:36.770014   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:37.270234   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:37.769610   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:38.270567   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:38.770558   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:39.269653   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:39.769895   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:40.270407   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:40.769781   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:41.270338   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:41.770411   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:42.269686   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:42.770028   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:43.269580   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:43.769636   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:44.269684   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:44.769627   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:45.272055   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:45.770418   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:46.269657   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:46.770575   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:47.270036   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:47.770377   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:48.270502   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:48.770450   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:49.269719   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:49.770449   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:50.269903   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:50.769675   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:51.270539   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:51.770618   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:52.270336   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:52.770354   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:53.270340   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:53.769901   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:54.270054   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:54.769747   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:55.270283   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:55.770525   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:56.269881   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:56.769908   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:57.269834   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:57.769631   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:58.270414   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:58.770529   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:59.269820   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:08:59.770577   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:00.269749   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:00.770275   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:01.270165   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:01.769910   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:02.269673   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:02.770492   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:03.270339   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:03.769642   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:04.269668   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:04.770177   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:05.270062   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:05.770571   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:06.270286   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:06.770466   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:07.269878   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:07.770593   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:08.270292   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:08.770068   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:09.269767   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:09.769619   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:10.270146   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:10.769659   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:11.270311   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:11.770596   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:12.269893   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:12.769649   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:13.270341   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:13.770530   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:14.269596   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:14.769532   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:14.769644   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:14.805181   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:14.805204   59960 cri.go:89] found id: ""
	I1126 20:09:14.805213   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:14.805269   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.809129   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:14.809206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:14.835451   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:14.835475   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:14.835480   59960 cri.go:89] found id: ""
	I1126 20:09:14.835487   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:14.835543   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.839249   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.842501   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:14.842574   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:14.867922   59960 cri.go:89] found id: ""
	I1126 20:09:14.867948   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.867957   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:14.867963   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:14.868022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:14.893599   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:14.893625   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:14.893630   59960 cri.go:89] found id: ""
	I1126 20:09:14.893638   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:14.893730   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.897540   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.901438   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:14.901540   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:14.929244   59960 cri.go:89] found id: ""
	I1126 20:09:14.929268   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.929277   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:14.929284   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:14.929340   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:14.956242   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:14.956264   59960 cri.go:89] found id: ""
	I1126 20:09:14.956272   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:14.956326   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:14.960197   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:14.960271   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:14.985332   59960 cri.go:89] found id: ""
	I1126 20:09:14.985407   59960 logs.go:282] 0 containers: []
	W1126 20:09:14.985428   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:14.985455   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:14.985495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:15.015412   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:15.015491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:15.446082   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:15.438231    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.438877    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440458    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440891    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.442380    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:15.438231    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.438877    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440458    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.440891    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:15.442380    1519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:15.446107   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:15.446122   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:15.474426   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:15.474452   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:15.514330   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:15.514364   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:15.582633   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:15.582662   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:15.636475   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:15.636508   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:15.718181   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:15.718215   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:15.814217   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:15.814253   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:15.826793   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:15.826823   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:15.854520   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:15.854550   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.382038   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:18.401602   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:18.401678   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:18.435808   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:18.435831   59960 cri.go:89] found id: ""
	I1126 20:09:18.435839   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:18.435907   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.439686   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:18.439801   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:18.476740   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:18.476764   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:18.476770   59960 cri.go:89] found id: ""
	I1126 20:09:18.476787   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:18.476889   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.480732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.484682   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:18.484783   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:18.511910   59960 cri.go:89] found id: ""
	I1126 20:09:18.511974   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.511989   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:18.511996   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:18.512055   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:18.547921   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:18.547988   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:18.548006   59960 cri.go:89] found id: ""
	I1126 20:09:18.548014   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:18.548071   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.552076   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.556982   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:18.557066   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:18.587286   59960 cri.go:89] found id: ""
	I1126 20:09:18.587313   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.587333   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:18.587340   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:18.587401   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:18.620541   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.620559   59960 cri.go:89] found id: ""
	I1126 20:09:18.620567   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:18.620626   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:18.624723   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:18.624796   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:18.653037   59960 cri.go:89] found id: ""
	I1126 20:09:18.653060   59960 logs.go:282] 0 containers: []
	W1126 20:09:18.653068   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:18.653077   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:18.653090   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:18.684308   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:18.684335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:18.776764   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:18.776798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:18.865581   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:18.856655    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858014    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858939    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.859710    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.861248    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:18.856655    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858014    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.858939    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.859710    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:18.861248    1653 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:18.865603   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:18.865616   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:18.909234   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:18.909270   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:18.960436   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:18.960477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:18.990735   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:18.990766   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:19.069643   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:19.069722   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:19.104112   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:19.104137   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:19.118175   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:19.118204   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:19.148200   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:19.148229   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:21.687827   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:21.698536   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:21.698621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:21.730147   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:21.730171   59960 cri.go:89] found id: ""
	I1126 20:09:21.730180   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:21.730235   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.735922   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:21.736012   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:21.763452   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:21.763481   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:21.763486   59960 cri.go:89] found id: ""
	I1126 20:09:21.763494   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:21.763551   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.767451   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.771041   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:21.771140   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:21.803663   59960 cri.go:89] found id: ""
	I1126 20:09:21.803688   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.803697   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:21.803703   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:21.803767   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:21.832470   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:21.832496   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:21.832501   59960 cri.go:89] found id: ""
	I1126 20:09:21.832510   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:21.832567   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.836410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.840076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:21.840157   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:21.866968   59960 cri.go:89] found id: ""
	I1126 20:09:21.866994   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.867004   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:21.867011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:21.867093   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:21.892977   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:21.893000   59960 cri.go:89] found id: ""
	I1126 20:09:21.893008   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:21.893083   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:21.896906   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:21.897019   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:21.923720   59960 cri.go:89] found id: ""
	I1126 20:09:21.923744   59960 logs.go:282] 0 containers: []
	W1126 20:09:21.923753   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:21.923762   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:21.923793   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:22.011751   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:22.003342    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.003880    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.005519    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.006189    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.007784    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:22.003342    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.003880    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.005519    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.006189    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:22.007784    1780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:22.011856   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:22.011890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:22.042091   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:22.042121   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:22.079857   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:22.079886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:22.179933   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:22.179973   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:22.207540   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:22.207568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:22.263434   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:22.263465   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:22.313145   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:22.313180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:22.365142   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:22.365177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:22.446886   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:22.446920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:22.483927   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:22.483961   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:24.996823   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:25.007913   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:25.007987   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:25.044777   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:25.044801   59960 cri.go:89] found id: ""
	I1126 20:09:25.044810   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:25.044870   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.048843   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:25.048923   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:25.083120   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:25.083187   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:25.083197   59960 cri.go:89] found id: ""
	I1126 20:09:25.083205   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:25.083271   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.086865   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.090526   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:25.090596   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:25.118710   59960 cri.go:89] found id: ""
	I1126 20:09:25.118735   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.118745   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:25.118752   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:25.118809   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:25.145818   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:25.145843   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:25.145850   59960 cri.go:89] found id: ""
	I1126 20:09:25.145857   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:25.145956   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.154268   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.159267   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:25.159348   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:25.185977   59960 cri.go:89] found id: ""
	I1126 20:09:25.186002   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.186011   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:25.186017   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:25.186072   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:25.213727   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:25.213751   59960 cri.go:89] found id: ""
	I1126 20:09:25.213760   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:25.213826   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:25.217850   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:25.217960   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:25.246743   59960 cri.go:89] found id: ""
	I1126 20:09:25.246769   59960 logs.go:282] 0 containers: []
	W1126 20:09:25.246779   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:25.246788   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:25.246800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:25.321227   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:25.312798    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.313456    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315126    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315598    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.317138    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:25.312798    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.313456    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315126    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.315598    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:25.317138    1919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:25.321251   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:25.321288   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:25.346983   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:25.347011   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:25.407991   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:25.408027   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:25.439857   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:25.439886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:25.467227   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:25.467252   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:25.549334   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:25.549371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:25.590791   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:25.590821   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:25.636096   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:25.636130   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:25.668287   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:25.668314   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:25.765804   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:25.765838   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:28.279160   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:28.290077   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:28.290149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:28.320697   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:28.320720   59960 cri.go:89] found id: ""
	I1126 20:09:28.320729   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:28.320786   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.324391   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:28.324466   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:28.351072   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:28.351094   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:28.351099   59960 cri.go:89] found id: ""
	I1126 20:09:28.351106   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:28.351161   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.355739   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.359260   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:28.359346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:28.386343   59960 cri.go:89] found id: ""
	I1126 20:09:28.386370   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.386383   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:28.386390   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:28.386457   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:28.413613   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:28.413635   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:28.413641   59960 cri.go:89] found id: ""
	I1126 20:09:28.413648   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:28.413701   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.417403   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.420731   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:28.420810   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:28.446127   59960 cri.go:89] found id: ""
	I1126 20:09:28.446202   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.446225   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:28.446245   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:28.446337   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:28.471432   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:28.471454   59960 cri.go:89] found id: ""
	I1126 20:09:28.471462   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:28.471545   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:28.475058   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:28.475141   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:28.502515   59960 cri.go:89] found id: ""
	I1126 20:09:28.502539   59960 logs.go:282] 0 containers: []
	W1126 20:09:28.502549   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:28.502559   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:28.502570   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:28.514608   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:28.514637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:28.557861   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:28.557890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:28.627880   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:28.627917   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:28.659730   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:28.659757   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:28.725495   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:28.717349    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.718072    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.719611    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.720154    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.722097    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:28.717349    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.718072    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.719611    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.720154    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:28.722097    2095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:28.725519   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:28.725532   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:28.763157   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:28.763187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:28.828543   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:28.828573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:28.855674   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:28.855707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:28.888296   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:28.888323   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:28.966101   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:28.966135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:31.560965   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:31.571673   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:31.571744   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:31.601161   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:31.601182   59960 cri.go:89] found id: ""
	I1126 20:09:31.601190   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:31.601269   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.605397   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:31.605476   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:31.631813   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:31.631835   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:31.631841   59960 cri.go:89] found id: ""
	I1126 20:09:31.631848   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:31.631904   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.635710   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.639546   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:31.639621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:31.674540   59960 cri.go:89] found id: ""
	I1126 20:09:31.674569   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.674578   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:31.674585   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:31.674643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:31.705780   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:31.705799   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:31.705803   59960 cri.go:89] found id: ""
	I1126 20:09:31.705810   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:31.705865   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.709862   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.713500   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:31.713591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:31.739394   59960 cri.go:89] found id: ""
	I1126 20:09:31.739419   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.739429   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:31.739435   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:31.739492   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:31.765811   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:31.765834   59960 cri.go:89] found id: ""
	I1126 20:09:31.765842   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:31.765960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:31.769463   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:31.769554   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:31.802081   59960 cri.go:89] found id: ""
	I1126 20:09:31.802107   59960 logs.go:282] 0 containers: []
	W1126 20:09:31.802116   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:31.802153   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:31.802172   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:31.849273   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:31.849308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:31.902662   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:31.902697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:31.990675   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:31.990710   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:32.022637   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:32.022667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:32.100797   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:32.092180    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.093036    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.094703    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.095415    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.097142    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:32.092180    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.093036    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.094703    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.095415    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:32.097142    2234 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:32.100820   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:32.100833   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:32.146149   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:32.146184   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:32.172943   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:32.172970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:32.199037   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:32.199063   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:32.306507   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:32.306540   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:32.319193   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:32.319221   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:34.849302   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:34.860158   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:34.860250   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:34.887094   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:34.887113   59960 cri.go:89] found id: ""
	I1126 20:09:34.887121   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:34.887177   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.890890   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:34.890964   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:34.921149   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:34.921177   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:34.921182   59960 cri.go:89] found id: ""
	I1126 20:09:34.921189   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:34.921243   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.924938   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.928493   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:34.928569   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:34.954052   59960 cri.go:89] found id: ""
	I1126 20:09:34.954078   59960 logs.go:282] 0 containers: []
	W1126 20:09:34.954087   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:34.954093   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:34.954206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:34.985031   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:34.985054   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:34.985059   59960 cri.go:89] found id: ""
	I1126 20:09:34.985067   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:34.985121   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.989050   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:34.992852   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:34.992934   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:35.019287   59960 cri.go:89] found id: ""
	I1126 20:09:35.019314   59960 logs.go:282] 0 containers: []
	W1126 20:09:35.019323   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:35.019330   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:35.019393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:35.049190   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:35.049217   59960 cri.go:89] found id: ""
	I1126 20:09:35.049237   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:35.049313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:35.053627   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:35.053713   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:35.091326   59960 cri.go:89] found id: ""
	I1126 20:09:35.091394   59960 logs.go:282] 0 containers: []
	W1126 20:09:35.091420   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:35.091440   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:35.091476   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:35.188523   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:35.188560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:35.220725   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:35.220755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:35.250614   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:35.250643   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:35.289963   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:35.289995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:35.303153   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:35.303180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:35.375929   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:35.367382    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.368117    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.369869    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.370618    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.372228    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:35.367382    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.368117    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.369869    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.370618    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:35.372228    2375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:35.375952   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:35.375968   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:35.403037   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:35.403066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:35.445367   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:35.445402   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:35.491101   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:35.491135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:35.561489   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:35.561524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:38.150634   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:38.161275   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:38.161346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:38.189434   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:38.189461   59960 cri.go:89] found id: ""
	I1126 20:09:38.189469   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:38.189530   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.195206   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:38.195288   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:38.223137   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:38.223160   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:38.223166   59960 cri.go:89] found id: ""
	I1126 20:09:38.223173   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:38.223227   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.226977   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.230547   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:38.230624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:38.255698   59960 cri.go:89] found id: ""
	I1126 20:09:38.255723   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.255732   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:38.255742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:38.255800   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:38.285059   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:38.285082   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:38.285087   59960 cri.go:89] found id: ""
	I1126 20:09:38.285097   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:38.285151   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.288799   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.292713   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:38.292786   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:38.318862   59960 cri.go:89] found id: ""
	I1126 20:09:38.318889   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.318898   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:38.318905   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:38.318963   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:38.346973   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:38.346996   59960 cri.go:89] found id: ""
	I1126 20:09:38.347005   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:38.347057   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:38.350729   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:38.350856   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:38.378801   59960 cri.go:89] found id: ""
	I1126 20:09:38.378827   59960 logs.go:282] 0 containers: []
	W1126 20:09:38.378836   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:38.378845   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:38.378915   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:38.390980   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:38.391009   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:38.422522   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:38.422550   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:38.469058   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:38.469133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:38.523109   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:38.523182   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:38.559691   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:38.559716   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:38.646468   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:38.646504   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:38.751509   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:38.751551   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:38.836492   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:38.827693    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.828759    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.829560    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.830636    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.831318    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:38.827693    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.828759    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.829560    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.830636    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:38.831318    2526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:38.836516   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:38.836528   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:38.876587   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:38.876623   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:38.910948   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:38.910987   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:41.443533   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:41.454798   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:41.454873   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:41.485670   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:41.485699   59960 cri.go:89] found id: ""
	I1126 20:09:41.485707   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:41.485761   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.489619   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:41.489690   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:41.525686   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:41.525710   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:41.525714   59960 cri.go:89] found id: ""
	I1126 20:09:41.525722   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:41.525777   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.536491   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.541670   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:41.541797   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:41.570295   59960 cri.go:89] found id: ""
	I1126 20:09:41.570319   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.570327   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:41.570334   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:41.570393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:41.598145   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:41.598169   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:41.598175   59960 cri.go:89] found id: ""
	I1126 20:09:41.598182   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:41.598258   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.602230   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.606445   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:41.606530   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:41.636614   59960 cri.go:89] found id: ""
	I1126 20:09:41.636637   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.636646   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:41.636652   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:41.636707   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:41.663292   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:41.663315   59960 cri.go:89] found id: ""
	I1126 20:09:41.663327   59960 logs.go:282] 1 containers: [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:41.663382   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:41.667194   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:41.667277   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:41.696056   59960 cri.go:89] found id: ""
	I1126 20:09:41.696081   59960 logs.go:282] 0 containers: []
	W1126 20:09:41.696090   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:41.696099   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:41.696110   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:41.794427   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:41.794463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:41.822463   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:41.822493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:41.871566   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:41.871599   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:41.916725   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:41.916759   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:41.950381   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:41.950410   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:41.982658   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:41.982692   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:41.996639   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:41.996672   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:42.087350   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:42.079184    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.079744    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081320    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081972    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.083647    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:42.079184    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.079744    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081320    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.081972    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:42.083647    2671 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:42.087369   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:42.087384   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:42.175919   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:42.176012   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:42.281379   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:42.281406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:44.882212   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:44.893873   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:44.893969   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:44.923663   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:44.923683   59960 cri.go:89] found id: ""
	I1126 20:09:44.923691   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:44.923744   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.927892   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:44.927959   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:44.958403   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:44.958423   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:44.958427   59960 cri.go:89] found id: ""
	I1126 20:09:44.958434   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:44.958486   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.962367   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:44.966913   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:44.966985   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:45.000482   59960 cri.go:89] found id: ""
	I1126 20:09:45.000503   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.000511   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:45.000517   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:45.000572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:45.031381   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:45.031401   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:45.031406   59960 cri.go:89] found id: ""
	I1126 20:09:45.031414   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:45.031471   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.036637   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.042551   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:45.042723   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:45.086906   59960 cri.go:89] found id: ""
	I1126 20:09:45.086987   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.087026   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:45.087050   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:45.087153   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:45.137504   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:45.137578   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:45.137598   59960 cri.go:89] found id: ""
	I1126 20:09:45.137621   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:45.137715   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.143678   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:45.149235   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:45.149438   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:45.196979   59960 cri.go:89] found id: ""
	I1126 20:09:45.197063   59960 logs.go:282] 0 containers: []
	W1126 20:09:45.197089   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:45.197146   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:45.197191   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:45.267194   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:45.267280   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:45.386434   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:45.386524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:45.468233   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:45.459943    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.460742    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462336    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462624    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.464644    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:45.459943    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.460742    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462336    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.462624    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:45.464644    2775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:45.468305   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:45.468342   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:45.541622   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:45.541649   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:45.613664   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:45.613695   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:45.641765   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:45.641794   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:45.702809   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:45.702837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:45.807019   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:45.807056   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:45.820258   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:45.820289   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:45.867345   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:45.867376   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:45.921560   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:45.921596   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:48.454091   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:48.464670   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:48.464755   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:48.493056   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:48.493081   59960 cri.go:89] found id: ""
	I1126 20:09:48.493089   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:48.493144   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.496943   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:48.497007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:48.524995   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:48.525020   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:48.525025   59960 cri.go:89] found id: ""
	I1126 20:09:48.525032   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:48.525085   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.528726   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.532247   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:48.532317   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:48.557862   59960 cri.go:89] found id: ""
	I1126 20:09:48.557887   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.557896   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:48.557902   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:48.557988   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:48.587744   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:48.587765   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:48.587770   59960 cri.go:89] found id: ""
	I1126 20:09:48.587777   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:48.587832   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.591388   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.594875   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:48.594985   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:48.627277   59960 cri.go:89] found id: ""
	I1126 20:09:48.627298   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.627313   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:48.627352   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:48.627433   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:48.664063   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:48.664088   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:48.664102   59960 cri.go:89] found id: ""
	I1126 20:09:48.664110   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:48.664222   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.668219   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:48.671608   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:48.671680   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:48.700294   59960 cri.go:89] found id: ""
	I1126 20:09:48.700322   59960 logs.go:282] 0 containers: []
	W1126 20:09:48.700331   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:48.700340   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:48.700351   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:48.793887   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:48.793974   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:48.807445   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:48.807472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:48.881133   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:48.873596    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.874156    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.875737    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.876232    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.877299    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:48.873596    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.874156    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.875737    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.876232    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:48.877299    2915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:48.881155   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:48.881167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:48.926338   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:48.926370   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:48.980929   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:48.980964   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:49.008703   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:49.008729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:49.035020   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:49.035134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:49.075209   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:49.075239   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:49.102778   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:49.102808   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:49.148209   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:49.148243   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:49.175449   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:49.175477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:51.750461   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:51.761173   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:51.761247   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:51.792174   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:51.792200   59960 cri.go:89] found id: ""
	I1126 20:09:51.792207   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:51.792272   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.796194   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:51.796266   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:51.826309   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:51.826333   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:51.826339   59960 cri.go:89] found id: ""
	I1126 20:09:51.826346   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:51.826408   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.830049   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.833626   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:51.833703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:51.864668   59960 cri.go:89] found id: ""
	I1126 20:09:51.864693   59960 logs.go:282] 0 containers: []
	W1126 20:09:51.864702   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:51.864709   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:51.864770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:51.902154   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:51.902178   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:51.902184   59960 cri.go:89] found id: ""
	I1126 20:09:51.902191   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:51.902244   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.906099   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.909550   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:51.909622   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:51.940956   59960 cri.go:89] found id: ""
	I1126 20:09:51.940984   59960 logs.go:282] 0 containers: []
	W1126 20:09:51.940993   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:51.941000   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:51.941057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:51.967086   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:51.967112   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:51.967117   59960 cri.go:89] found id: ""
	I1126 20:09:51.967125   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:51.967206   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.970992   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:51.974344   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:51.974463   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:52.006654   59960 cri.go:89] found id: ""
	I1126 20:09:52.006675   59960 logs.go:282] 0 containers: []
	W1126 20:09:52.006684   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:52.006693   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:52.006705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:52.033587   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:52.033621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:52.062777   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:52.062810   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:52.136250   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:52.127112    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.127989    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.129548    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.130437    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.132317    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:52.127112    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.127989    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.129548    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.130437    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:52.132317    3069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:52.136279   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:52.136292   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:52.165716   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:52.165792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:52.210120   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:52.210157   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:52.266182   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:52.266228   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:52.296704   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:52.296732   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:52.373394   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:52.373432   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:52.409405   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:52.409436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:52.508717   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:52.508755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:52.520510   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:52.520577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.069988   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:55.081385   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:55.081477   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:55.109272   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:55.109297   59960 cri.go:89] found id: ""
	I1126 20:09:55.109306   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:55.109393   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.113332   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:55.113409   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:55.144644   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.144728   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:55.144749   59960 cri.go:89] found id: ""
	I1126 20:09:55.144782   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:55.144860   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.148962   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.153598   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:55.153724   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:55.180168   59960 cri.go:89] found id: ""
	I1126 20:09:55.180235   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.180274   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:55.180302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:55.180378   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:55.207578   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:55.207606   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:55.207611   59960 cri.go:89] found id: ""
	I1126 20:09:55.207621   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:55.207698   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.211665   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.215295   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:55.215371   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:55.243201   59960 cri.go:89] found id: ""
	I1126 20:09:55.243228   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.243237   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:55.243243   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:55.243299   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:55.273345   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:55.273370   59960 cri.go:89] found id: "7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:55.273375   59960 cri.go:89] found id: ""
	I1126 20:09:55.273382   59960 logs.go:282] 2 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f]
	I1126 20:09:55.273434   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.277156   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:55.280557   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:55.280629   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:55.306973   59960 cri.go:89] found id: ""
	I1126 20:09:55.307037   59960 logs.go:282] 0 containers: []
	W1126 20:09:55.307052   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:55.307061   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:55.307072   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:55.405440   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:55.405474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:55.418598   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:55.418628   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:55.487261   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:55.479261    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.479915    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481393    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481846    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.483618    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:55.479261    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.479915    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481393    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.481846    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:55.483618    3202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:55.487286   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:55.487299   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:55.531555   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:55.531626   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:55.601020   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:55.601057   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:55.632319   59960 logs.go:123] Gathering logs for kube-controller-manager [7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f] ...
	I1126 20:09:55.632347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7265a1863deba220803b023ae281c19e30b2afb00cffffdf24d8581cd818c53f"
	I1126 20:09:55.660851   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:55.660881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:55.742963   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:55.742998   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:55.773047   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:55.773076   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:55.826960   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:55.826991   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:55.855917   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:55.855944   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:09:58.399772   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:09:58.415975   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:09:58.416043   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:09:58.442760   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:58.442782   59960 cri.go:89] found id: ""
	I1126 20:09:58.442792   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:09:58.442850   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.446527   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:09:58.446620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:09:58.476049   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:58.476071   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:58.476076   59960 cri.go:89] found id: ""
	I1126 20:09:58.476084   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:09:58.476141   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.480019   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.483716   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:09:58.483799   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:09:58.514116   59960 cri.go:89] found id: ""
	I1126 20:09:58.514138   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.514147   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:09:58.514153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:09:58.514220   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:09:58.547211   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:58.547233   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:58.547239   59960 cri.go:89] found id: ""
	I1126 20:09:58.547257   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:09:58.547342   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.551299   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.554848   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:09:58.554921   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:09:58.583768   59960 cri.go:89] found id: ""
	I1126 20:09:58.583793   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.583802   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:09:58.583809   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:09:58.583865   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:09:58.611601   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:58.611635   59960 cri.go:89] found id: ""
	I1126 20:09:58.611644   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:09:58.611703   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:09:58.615732   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:09:58.615802   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:09:58.646048   59960 cri.go:89] found id: ""
	I1126 20:09:58.646087   59960 logs.go:282] 0 containers: []
	W1126 20:09:58.646096   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:09:58.646106   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:09:58.646135   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:09:58.745296   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:09:58.745332   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:09:58.820265   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:09:58.811642    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.812262    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.813785    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.814448    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.815924    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:09:58.811642    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.812262    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.813785    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.814448    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:09:58.815924    3345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:09:58.820294   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:09:58.820308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:09:58.877523   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:09:58.877556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:09:58.904630   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:09:58.904656   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:09:58.980105   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:09:58.980138   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:09:58.992220   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:09:58.992248   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:09:59.019086   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:09:59.019112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:09:59.058229   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:09:59.058260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:09:59.106394   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:09:59.106427   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:09:59.134445   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:09:59.134474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:01.667677   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:01.679153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:01.679227   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:01.713101   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:01.713122   59960 cri.go:89] found id: ""
	I1126 20:10:01.713130   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:01.713185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.717042   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:01.717117   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:01.748792   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:01.748817   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:01.748823   59960 cri.go:89] found id: ""
	I1126 20:10:01.748832   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:01.748889   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.752752   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.756411   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:01.756487   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:01.785898   59960 cri.go:89] found id: ""
	I1126 20:10:01.785954   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.785964   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:01.785971   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:01.786033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:01.817470   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:01.817496   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:01.817502   59960 cri.go:89] found id: ""
	I1126 20:10:01.817509   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:01.817567   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.821688   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.826052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:01.826203   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:01.856542   59960 cri.go:89] found id: ""
	I1126 20:10:01.856568   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.856590   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:01.856620   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:01.856742   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:01.893138   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:01.893218   59960 cri.go:89] found id: ""
	I1126 20:10:01.893242   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:01.893337   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:01.897863   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:01.898026   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:01.935921   59960 cri.go:89] found id: ""
	I1126 20:10:01.935951   59960 logs.go:282] 0 containers: []
	W1126 20:10:01.935961   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:01.935971   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:01.935985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:01.973303   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:01.973332   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:02.028454   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:02.028493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:02.074241   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:02.074272   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:02.162898   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:02.162936   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:02.176057   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:02.176088   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:02.235629   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:02.235665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:02.306607   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:02.306643   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:02.337699   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:02.337729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:02.374553   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:02.374582   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:02.481202   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:02.481238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:02.563313   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:02.555444    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.556211    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.557668    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.558242    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.559786    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:02.555444    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.556211    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.557668    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.558242    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:02.559786    3547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:05.064305   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:05.075852   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:05.075925   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:05.108322   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:05.108345   59960 cri.go:89] found id: ""
	I1126 20:10:05.108354   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:05.108410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.112382   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:05.112460   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:05.140946   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:05.141021   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:05.141040   59960 cri.go:89] found id: ""
	I1126 20:10:05.141063   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:05.141150   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.145278   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.148898   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:05.148974   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:05.176423   59960 cri.go:89] found id: ""
	I1126 20:10:05.176450   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.176459   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:05.176466   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:05.176527   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:05.204990   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:05.205013   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:05.205018   59960 cri.go:89] found id: ""
	I1126 20:10:05.205026   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:05.205088   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.208959   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.212627   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:05.212730   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:05.239581   59960 cri.go:89] found id: ""
	I1126 20:10:05.239604   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.239614   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:05.239620   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:05.239679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:05.268087   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:05.268110   59960 cri.go:89] found id: ""
	I1126 20:10:05.268119   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:05.268176   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:05.271819   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:05.271923   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:05.298753   59960 cri.go:89] found id: ""
	I1126 20:10:05.298819   59960 logs.go:282] 0 containers: []
	W1126 20:10:05.298833   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:05.298843   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:05.298855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:05.325518   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:05.325548   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:05.376406   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:05.376438   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:05.428781   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:05.428943   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:05.459754   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:05.459786   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:05.487550   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:05.487581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:05.520035   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:05.520071   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:05.616425   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:05.616503   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:05.630189   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:05.630221   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:05.715272   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:05.705315    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.706188    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708012    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.708749    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:05.710497    3677 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:10:05.715301   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:05.715315   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:05.768473   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:05.768507   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:08.349688   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:08.360619   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:08.360693   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:08.388583   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:08.388610   59960 cri.go:89] found id: ""
	I1126 20:10:08.388619   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:08.388678   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.392264   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:08.392334   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:08.418523   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:08.418549   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:08.418554   59960 cri.go:89] found id: ""
	I1126 20:10:08.418562   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:08.418621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.422368   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.425851   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:08.425954   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:08.456520   59960 cri.go:89] found id: ""
	I1126 20:10:08.456546   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.456555   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:08.456562   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:08.456620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:08.487158   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:08.487182   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:08.487186   59960 cri.go:89] found id: ""
	I1126 20:10:08.487195   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:08.487268   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.491193   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.494690   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:08.494760   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:08.523674   59960 cri.go:89] found id: ""
	I1126 20:10:08.523699   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.523708   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:08.523715   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:08.523773   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:08.569422   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:08.569442   59960 cri.go:89] found id: ""
	I1126 20:10:08.569449   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:08.569505   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:08.572997   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:08.573065   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:08.599736   59960 cri.go:89] found id: ""
	I1126 20:10:08.599763   59960 logs.go:282] 0 containers: []
	W1126 20:10:08.599772   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:08.599781   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:08.599799   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:08.674461   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:08.665974    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.666705    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.668447    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.669108    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:08.670766    3757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:10:08.674482   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:08.674495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:08.726546   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:08.726591   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:08.783639   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:08.783690   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:08.860709   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:08.860759   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:08.873030   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:08.873058   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:08.899170   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:08.899199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:08.940773   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:08.940855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:08.969671   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:08.969762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:09.001544   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:09.001621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:09.035799   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:09.035837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:11.634159   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:11.645145   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:11.645262   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:11.684091   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:11.684113   59960 cri.go:89] found id: ""
	I1126 20:10:11.684121   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:11.684198   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.687930   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:11.688002   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:11.716342   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:11.716366   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:11.716372   59960 cri.go:89] found id: ""
	I1126 20:10:11.716380   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:11.716438   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.720592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.724106   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:11.724181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:11.750971   59960 cri.go:89] found id: ""
	I1126 20:10:11.750997   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.751007   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:11.751014   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:11.751140   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:11.778888   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:11.778912   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:11.778917   59960 cri.go:89] found id: ""
	I1126 20:10:11.778924   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:11.778979   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.782704   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.786153   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:11.786245   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:11.812859   59960 cri.go:89] found id: ""
	I1126 20:10:11.812924   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.812953   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:11.812972   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:11.813047   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:11.844995   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:11.845065   59960 cri.go:89] found id: ""
	I1126 20:10:11.845089   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:11.845159   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:11.848928   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:11.849056   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:11.878557   59960 cri.go:89] found id: ""
	I1126 20:10:11.878634   59960 logs.go:282] 0 containers: []
	W1126 20:10:11.878657   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:11.878674   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:11.878686   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:11.911996   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:11.912024   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:11.957531   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:11.957700   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:12.002561   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:12.002600   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:12.037611   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:12.037655   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:12.124659   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:12.124695   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:12.157527   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:12.157559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:12.255561   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:12.255597   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:12.270701   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:12.270727   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:12.344084   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:12.335378    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.336132    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.337729    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.338527    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:12.340203    3942 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1126 20:10:12.344111   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:12.344127   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:12.414064   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:12.414099   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:14.957062   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:14.971279   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:14.971358   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:15.002850   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:15.002871   59960 cri.go:89] found id: ""
	I1126 20:10:15.002879   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:15.002953   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.007210   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:15.007317   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:15.044904   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:15.044929   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:15.044934   59960 cri.go:89] found id: ""
	I1126 20:10:15.044943   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:15.045037   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.050180   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.055192   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:15.055293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:15.087772   59960 cri.go:89] found id: ""
	I1126 20:10:15.087798   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.087815   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:15.087822   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:15.087883   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:15.117095   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:15.117114   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:15.117119   59960 cri.go:89] found id: ""
	I1126 20:10:15.117127   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:15.117185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.120995   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.124760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:15.124885   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:15.157854   59960 cri.go:89] found id: ""
	I1126 20:10:15.157954   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.157994   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:15.158017   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:15.158084   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:15.190383   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:15.190407   59960 cri.go:89] found id: ""
	I1126 20:10:15.190417   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:15.190474   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:15.194524   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:15.194624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:15.223311   59960 cri.go:89] found id: ""
	I1126 20:10:15.223337   59960 logs.go:282] 0 containers: []
	W1126 20:10:15.223346   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:15.223355   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:15.223366   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:15.236105   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:15.236134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:15.263408   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:15.263436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:15.308099   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:15.308133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:15.370222   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:15.370258   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:15.412978   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:15.413009   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:15.482330   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:15.473679    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.474420    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476124    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476749    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.478398    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:15.473679    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.474420    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476124    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.476749    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:15.478398    4073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:15.482403   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:15.482428   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:15.528305   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:15.528335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:15.564111   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:15.564138   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:15.592541   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:15.592569   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:15.673319   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:15.673357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:18.279646   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:18.290358   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:18.290427   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:18.319136   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:18.319159   59960 cri.go:89] found id: ""
	I1126 20:10:18.319168   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:18.319225   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.322893   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:18.322967   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:18.350092   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:18.350120   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:18.350126   59960 cri.go:89] found id: ""
	I1126 20:10:18.350139   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:18.350193   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.354777   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.358503   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:18.358602   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:18.396162   59960 cri.go:89] found id: ""
	I1126 20:10:18.396185   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.396193   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:18.396199   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:18.396262   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:18.430093   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:18.430119   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:18.430124   59960 cri.go:89] found id: ""
	I1126 20:10:18.430131   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:18.430196   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.434456   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.438374   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:18.438451   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:18.478030   59960 cri.go:89] found id: ""
	I1126 20:10:18.478058   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.478070   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:18.478076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:18.478137   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:18.506317   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:18.506340   59960 cri.go:89] found id: ""
	I1126 20:10:18.506349   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:18.506410   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:18.510476   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:18.510552   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:18.550337   59960 cri.go:89] found id: ""
	I1126 20:10:18.550408   59960 logs.go:282] 0 containers: []
	W1126 20:10:18.550436   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:18.550454   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:18.550487   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:18.621602   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:18.613602    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.614230    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.615899    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.616339    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.617881    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:18.613602    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.614230    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.615899    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.616339    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:18.617881    4172 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:18.621625   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:18.621638   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:18.648795   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:18.648824   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:18.691314   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:18.691358   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:18.771327   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:18.771367   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:18.808287   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:18.808319   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:18.907011   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:18.907048   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:18.919575   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:18.919605   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:18.961664   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:18.961697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:19.020056   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:19.020092   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:19.050179   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:19.050206   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:21.599106   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:21.611209   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:21.611309   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:21.639207   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:21.639229   59960 cri.go:89] found id: ""
	I1126 20:10:21.639238   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:21.639296   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.643290   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:21.643365   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:21.675608   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:21.675633   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:21.675639   59960 cri.go:89] found id: ""
	I1126 20:10:21.675648   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:21.675702   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.679772   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.683385   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:21.683511   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:21.719004   59960 cri.go:89] found id: ""
	I1126 20:10:21.719078   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.719102   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:21.719123   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:21.719196   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:21.745555   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:21.745634   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:21.745660   59960 cri.go:89] found id: ""
	I1126 20:10:21.745681   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:21.745750   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.750313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.753830   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:21.753907   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:21.781119   59960 cri.go:89] found id: ""
	I1126 20:10:21.781199   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.781222   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:21.781243   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:21.781347   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:21.809894   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:21.810006   59960 cri.go:89] found id: ""
	I1126 20:10:21.810022   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:21.810092   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:21.813756   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:21.813853   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:21.840725   59960 cri.go:89] found id: ""
	I1126 20:10:21.840751   59960 logs.go:282] 0 containers: []
	W1126 20:10:21.840760   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:21.840769   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:21.840781   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:21.854145   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:21.854177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:21.884873   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:21.884902   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:21.936427   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:21.936463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:21.990170   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:21.990205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:22.077016   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:22.077064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:22.106941   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:22.106974   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:22.136672   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:22.136703   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:22.235594   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:22.235630   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:22.305008   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:22.295860    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.296666    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.298548    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.299084    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.300765    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:22.295860    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.296666    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.298548    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.299084    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:22.300765    4358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:22.305032   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:22.305046   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:22.378673   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:22.378711   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:24.920612   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:24.931941   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:24.932015   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:24.958956   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:24.958979   59960 cri.go:89] found id: ""
	I1126 20:10:24.958988   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:24.959047   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.962853   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:24.962931   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:24.989108   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:24.989130   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:24.989134   59960 cri.go:89] found id: ""
	I1126 20:10:24.989141   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:24.989195   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.992756   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:24.996360   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:24.996431   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:25.023636   59960 cri.go:89] found id: ""
	I1126 20:10:25.023660   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.023670   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:25.023676   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:25.023751   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:25.056300   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:25.056325   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:25.056331   59960 cri.go:89] found id: ""
	I1126 20:10:25.056339   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:25.056407   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.060822   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.066693   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:25.066825   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:25.098171   59960 cri.go:89] found id: ""
	I1126 20:10:25.098239   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.098258   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:25.098265   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:25.098344   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:25.129634   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:25.129655   59960 cri.go:89] found id: ""
	I1126 20:10:25.129664   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:25.129759   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:25.134599   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:25.134715   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:25.166870   59960 cri.go:89] found id: ""
	I1126 20:10:25.166896   59960 logs.go:282] 0 containers: []
	W1126 20:10:25.166905   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:25.166918   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:25.166931   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:25.201303   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:25.201335   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:25.234106   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:25.234132   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:25.335293   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:25.335329   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:25.367895   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:25.367920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:25.408499   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:25.408540   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:25.489459   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:25.489496   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:25.525614   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:25.525642   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:25.540937   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:25.541079   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:25.619457   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:25.611129    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.611986    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.613567    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.614319    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.615842    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:25.611129    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.611986    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.613567    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.614319    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:25.615842    4492 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:25.619480   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:25.619494   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:25.667380   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:25.667419   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.233076   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:28.244698   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:28.244770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:28.272507   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:28.272530   59960 cri.go:89] found id: ""
	I1126 20:10:28.272538   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:28.272596   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.276257   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:28.276333   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:28.303315   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:28.303337   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:28.303342   59960 cri.go:89] found id: ""
	I1126 20:10:28.303349   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:28.303429   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.307300   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.310655   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:28.310727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:28.337118   59960 cri.go:89] found id: ""
	I1126 20:10:28.337140   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.337150   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:28.337156   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:28.337214   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:28.364328   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.364352   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:28.364358   59960 cri.go:89] found id: ""
	I1126 20:10:28.364374   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:28.364436   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.368741   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.372299   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:28.372385   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:28.398315   59960 cri.go:89] found id: ""
	I1126 20:10:28.398342   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.398351   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:28.398357   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:28.398418   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:28.426255   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:28.426276   59960 cri.go:89] found id: ""
	I1126 20:10:28.426287   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:28.426342   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:28.429863   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:28.430017   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:28.456908   59960 cri.go:89] found id: ""
	I1126 20:10:28.456933   59960 logs.go:282] 0 containers: []
	W1126 20:10:28.456942   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:28.456951   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:28.456962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:28.532783   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:28.532820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:28.637119   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:28.637160   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:28.711269   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:28.702783    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.703978    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.704633    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706176    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706692    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:28.702783    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.703978    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.704633    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706176    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:28.706692    4585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:28.711288   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:28.711304   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:28.737855   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:28.737883   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:28.789442   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:28.789477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:28.820705   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:28.820738   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:28.855530   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:28.855560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:28.868297   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:28.868324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:28.913639   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:28.913673   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:28.973350   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:28.973386   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:31.500924   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:31.511869   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:31.511943   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:31.546414   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:31.546447   59960 cri.go:89] found id: ""
	I1126 20:10:31.546456   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:31.546559   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.550296   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:31.550368   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:31.577840   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:31.577859   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:31.577864   59960 cri.go:89] found id: ""
	I1126 20:10:31.577870   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:31.577967   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.581789   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.585352   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:31.585421   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:31.616396   59960 cri.go:89] found id: ""
	I1126 20:10:31.616419   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.616428   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:31.616435   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:31.616491   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:31.641907   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:31.641971   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:31.641977   59960 cri.go:89] found id: ""
	I1126 20:10:31.641984   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:31.642048   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.645886   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.649651   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:31.649732   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:31.682488   59960 cri.go:89] found id: ""
	I1126 20:10:31.682512   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.682521   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:31.682527   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:31.682597   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:31.713608   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:31.713632   59960 cri.go:89] found id: ""
	I1126 20:10:31.713641   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:31.713693   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:31.717274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:31.717349   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:31.750907   59960 cri.go:89] found id: ""
	I1126 20:10:31.750934   59960 logs.go:282] 0 containers: []
	W1126 20:10:31.750948   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:31.750957   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:31.750970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:31.822403   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:31.813458    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.814237    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.815876    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.816493    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.818239    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:31.813458    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.814237    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.815876    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.816493    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:31.818239    4715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:31.822425   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:31.822440   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:31.849676   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:31.849705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:31.891923   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:31.891959   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:31.944564   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:31.944608   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:32.015493   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:32.015577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:32.047447   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:32.047480   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:32.127183   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:32.127225   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:32.229734   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:32.229767   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:32.243678   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:32.243719   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:32.271264   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:32.271291   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:34.809253   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:34.819692   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:34.819817   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:34.846220   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:34.846240   59960 cri.go:89] found id: ""
	I1126 20:10:34.846248   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:34.846302   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.849960   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:34.850035   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:34.875486   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:34.875510   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:34.875515   59960 cri.go:89] found id: ""
	I1126 20:10:34.875522   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:34.875591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.879655   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.883266   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:34.883341   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:34.910257   59960 cri.go:89] found id: ""
	I1126 20:10:34.910286   59960 logs.go:282] 0 containers: []
	W1126 20:10:34.910295   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:34.910302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:34.910359   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:34.936501   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:34.936526   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:34.936531   59960 cri.go:89] found id: ""
	I1126 20:10:34.936539   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:34.936602   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.940297   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:34.943886   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:34.943960   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:34.970440   59960 cri.go:89] found id: ""
	I1126 20:10:34.970467   59960 logs.go:282] 0 containers: []
	W1126 20:10:34.970476   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:34.970482   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:34.970540   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:34.996813   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:34.996833   59960 cri.go:89] found id: ""
	I1126 20:10:34.996842   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:34.996901   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:35.000962   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:35.001030   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:35.029207   59960 cri.go:89] found id: ""
	I1126 20:10:35.029229   59960 logs.go:282] 0 containers: []
	W1126 20:10:35.029237   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:35.029247   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:35.029259   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:35.089280   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:35.089316   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:35.137518   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:35.137557   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:35.198701   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:35.198741   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:35.226526   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:35.226560   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:35.308302   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:35.308341   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:35.411713   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:35.411751   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:35.425089   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:35.425118   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:35.496500   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:35.487044    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.487890    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.489861    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.490651    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.492443    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:35.487044    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.487890    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.489861    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.490651    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:35.492443    4896 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:35.496523   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:35.496538   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:35.521713   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:35.521740   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:35.552491   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:35.552520   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:38.092147   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:38.105386   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:38.105494   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:38.134115   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:38.134183   59960 cri.go:89] found id: ""
	I1126 20:10:38.134204   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:38.134297   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.138342   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:38.138463   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:38.165373   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:38.165448   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:38.165468   59960 cri.go:89] found id: ""
	I1126 20:10:38.165492   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:38.165591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.169464   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.173100   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:38.173220   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:38.201795   59960 cri.go:89] found id: ""
	I1126 20:10:38.201818   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.201826   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:38.201836   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:38.201895   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:38.234752   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:38.234776   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:38.234782   59960 cri.go:89] found id: ""
	I1126 20:10:38.234789   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:38.234845   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.239023   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.242779   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:38.242854   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:38.271155   59960 cri.go:89] found id: ""
	I1126 20:10:38.271184   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.271193   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:38.271200   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:38.271261   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:38.298657   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:38.298682   59960 cri.go:89] found id: ""
	I1126 20:10:38.298691   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:38.298766   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:38.302858   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:38.302929   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:38.330494   59960 cri.go:89] found id: ""
	I1126 20:10:38.330520   59960 logs.go:282] 0 containers: []
	W1126 20:10:38.330529   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:38.330538   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:38.330570   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:38.356340   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:38.356374   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:38.401509   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:38.401542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:38.463681   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:38.463719   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:38.496848   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:38.496881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:38.524848   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:38.524875   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:38.607033   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:38.607098   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:38.709803   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:38.709840   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:38.722963   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:38.722995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:38.796592   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:38.787909    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.788704    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.790425    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.791012    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.792912    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:38.787909    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.788704    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.790425    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.791012    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:38.792912    5041 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:38.796617   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:38.796635   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:38.836671   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:38.836707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:41.373598   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:41.384711   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:41.384792   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:41.414012   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:41.414038   59960 cri.go:89] found id: ""
	I1126 20:10:41.414047   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:41.414103   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.417961   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:41.418036   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:41.450051   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:41.450076   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:41.450082   59960 cri.go:89] found id: ""
	I1126 20:10:41.450089   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:41.450147   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.455240   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.459174   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:41.459275   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:41.487216   59960 cri.go:89] found id: ""
	I1126 20:10:41.487241   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.487250   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:41.487257   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:41.487340   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:41.515666   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:41.515739   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:41.515751   59960 cri.go:89] found id: ""
	I1126 20:10:41.515759   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:41.515817   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.519735   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.523565   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:41.523639   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:41.554213   59960 cri.go:89] found id: ""
	I1126 20:10:41.554240   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.554250   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:41.554256   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:41.554321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:41.584766   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:41.584790   59960 cri.go:89] found id: ""
	I1126 20:10:41.584799   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:41.584861   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:41.589437   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:41.589510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:41.616610   59960 cri.go:89] found id: ""
	I1126 20:10:41.616638   59960 logs.go:282] 0 containers: []
	W1126 20:10:41.616648   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:41.616657   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:41.616669   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:41.696316   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:41.696352   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:41.765798   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:41.758434    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.758824    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760333    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760643    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.762180    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:41.758434    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.758824    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760333    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.760643    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:41.762180    5133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:41.765870   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:41.765900   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:41.791490   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:41.791517   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:41.827993   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:41.828022   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:41.854480   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:41.854511   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:41.885603   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:41.885632   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:41.984936   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:41.984970   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:41.997672   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:41.997701   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:42.039613   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:42.039668   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:42.100317   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:42.100359   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:44.745690   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:44.756208   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:44.756277   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:44.793586   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:44.793606   59960 cri.go:89] found id: ""
	I1126 20:10:44.793614   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:44.793666   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.797466   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:44.797561   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:44.823288   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:44.823313   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:44.823319   59960 cri.go:89] found id: ""
	I1126 20:10:44.823326   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:44.823383   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.828270   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.832190   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:44.832260   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:44.858643   59960 cri.go:89] found id: ""
	I1126 20:10:44.858694   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.858704   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:44.858711   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:44.858772   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:44.887625   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:44.887711   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:44.887722   59960 cri.go:89] found id: ""
	I1126 20:10:44.887730   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:44.887791   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.891593   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.895076   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:44.895151   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:44.924994   59960 cri.go:89] found id: ""
	I1126 20:10:44.925060   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.925085   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:44.925104   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:44.925196   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:44.951783   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:44.951807   59960 cri.go:89] found id: ""
	I1126 20:10:44.951816   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:44.951874   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:44.955505   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:44.955620   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:44.982789   59960 cri.go:89] found id: ""
	I1126 20:10:44.982814   59960 logs.go:282] 0 containers: []
	W1126 20:10:44.982822   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:44.982831   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:44.982843   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:45.010557   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:45.010586   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:45.141549   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:45.141632   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:45.253485   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:45.253554   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:45.353619   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:45.353660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:45.408761   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:45.408795   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:45.443664   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:45.443692   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:45.470742   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:45.470773   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:45.504515   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:45.504544   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:45.608220   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:45.608254   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:45.620732   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:45.620761   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:45.707896   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:45.695026    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.696388    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.697297    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.699791    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.700340    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:45.695026    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.696388    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.697297    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.699791    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:45.700340    5337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:48.209609   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:48.220742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:48.220811   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:48.247863   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:48.247886   59960 cri.go:89] found id: ""
	I1126 20:10:48.247894   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:48.247949   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.251929   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:48.251997   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:48.280449   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:48.280470   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:48.280475   59960 cri.go:89] found id: ""
	I1126 20:10:48.280483   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:48.280537   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.284732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.288315   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:48.288405   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:48.316409   59960 cri.go:89] found id: ""
	I1126 20:10:48.316432   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.316440   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:48.316446   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:48.316506   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:48.349208   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:48.349271   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:48.349289   59960 cri.go:89] found id: ""
	I1126 20:10:48.349316   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:48.349408   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.354353   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.357751   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:48.357848   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:48.385059   59960 cri.go:89] found id: ""
	I1126 20:10:48.385081   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.385090   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:48.385107   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:48.385185   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:48.411304   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:48.411326   59960 cri.go:89] found id: ""
	I1126 20:10:48.411334   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:48.411405   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:48.415053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:48.415156   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:48.441024   59960 cri.go:89] found id: ""
	I1126 20:10:48.441046   59960 logs.go:282] 0 containers: []
	W1126 20:10:48.441055   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:48.441063   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:48.441075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:48.469644   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:48.469672   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:48.510776   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:48.510859   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:48.592885   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:48.592917   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:48.620191   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:48.620216   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:48.715671   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:48.715746   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:48.730976   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:48.731004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:48.784446   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:48.784483   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:48.816189   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:48.816220   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:48.894569   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:48.894607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:48.934181   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:48.934214   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:49.000322   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:48.992247    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.992990    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994167    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994648    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.996101    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:48.992247    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.992990    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994167    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.994648    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:48.996101    5475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:51.500568   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:51.512500   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:51.512570   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:51.550166   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:51.550188   59960 cri.go:89] found id: ""
	I1126 20:10:51.550196   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:51.550253   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.554115   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:51.554221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:51.580857   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:51.580880   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:51.580885   59960 cri.go:89] found id: ""
	I1126 20:10:51.580893   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:51.580949   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.584903   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.588661   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:51.588730   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:51.620121   59960 cri.go:89] found id: ""
	I1126 20:10:51.620147   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.620156   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:51.620163   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:51.620225   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:51.648043   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:51.648066   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:51.648071   59960 cri.go:89] found id: ""
	I1126 20:10:51.648079   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:51.648144   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.652146   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.656590   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:51.656658   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:51.684798   59960 cri.go:89] found id: ""
	I1126 20:10:51.684825   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.684835   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:51.684842   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:51.684900   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:51.712247   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:51.712270   59960 cri.go:89] found id: ""
	I1126 20:10:51.712279   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:51.712334   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:51.716105   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:51.716235   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:51.755296   59960 cri.go:89] found id: ""
	I1126 20:10:51.755373   59960 logs.go:282] 0 containers: []
	W1126 20:10:51.755389   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:51.755400   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:51.755412   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:51.782840   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:51.782871   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:51.826403   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:51.826436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:51.894112   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:51.894148   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:51.920185   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:51.920212   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:51.993815   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:51.993856   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:52.030774   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:52.030804   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:52.112821   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:52.103396    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.104540    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.105295    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.106939    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.107489    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:52.103396    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.104540    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.105295    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.106939    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:52.107489    5587 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:52.112847   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:52.112861   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:52.161738   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:52.161771   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:52.193340   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:52.193368   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:52.291814   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:52.291862   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:54.810104   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:54.820898   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:54.820971   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:54.849431   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:54.849454   59960 cri.go:89] found id: ""
	I1126 20:10:54.849462   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:54.849524   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.853394   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:54.853465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:54.879833   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:54.879855   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:54.879860   59960 cri.go:89] found id: ""
	I1126 20:10:54.879867   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:54.879926   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.883636   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.887200   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:54.887280   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:54.913349   59960 cri.go:89] found id: ""
	I1126 20:10:54.913374   59960 logs.go:282] 0 containers: []
	W1126 20:10:54.913382   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:54.913389   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:54.913446   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:54.941189   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:54.941215   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:54.941221   59960 cri.go:89] found id: ""
	I1126 20:10:54.941229   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:54.941285   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.945133   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:54.948594   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:54.948673   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:54.977649   59960 cri.go:89] found id: ""
	I1126 20:10:54.977677   59960 logs.go:282] 0 containers: []
	W1126 20:10:54.977687   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:54.977693   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:54.977768   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:55.008912   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:55.008938   59960 cri.go:89] found id: ""
	I1126 20:10:55.008948   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:55.009005   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:55.012659   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:55.012727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:55.056313   59960 cri.go:89] found id: ""
	I1126 20:10:55.056393   59960 logs.go:282] 0 containers: []
	W1126 20:10:55.056419   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:55.056449   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:55.056478   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:55.170137   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:55.170180   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:55.194458   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:55.194489   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:55.279906   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:55.272019    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.272480    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274150    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274543    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.276078    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:55.272019    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.272480    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274150    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.274543    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:55.276078    5685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:55.279931   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:55.279945   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:55.321902   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:55.321949   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:55.351446   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:55.351474   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:55.426688   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:55.426723   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:55.463472   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:55.463501   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:55.510565   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:55.510598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:55.580501   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:55.580534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:55.614574   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:55.614602   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:10:58.162969   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:10:58.173910   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:10:58.174019   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:10:58.202329   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:58.202352   59960 cri.go:89] found id: ""
	I1126 20:10:58.202360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:10:58.202415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.206274   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:10:58.206347   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:10:58.233721   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:58.233741   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:58.233745   59960 cri.go:89] found id: ""
	I1126 20:10:58.233753   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:10:58.233811   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.237802   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.242346   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:10:58.242419   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:10:58.271013   59960 cri.go:89] found id: ""
	I1126 20:10:58.271038   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.271047   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:10:58.271053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:10:58.271109   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:10:58.298515   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:58.298538   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:58.298553   59960 cri.go:89] found id: ""
	I1126 20:10:58.298560   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:10:58.298617   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.302497   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.306172   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:10:58.306241   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:10:58.331672   59960 cri.go:89] found id: ""
	I1126 20:10:58.331698   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.331707   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:10:58.331714   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:10:58.331819   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:10:58.359197   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:58.359219   59960 cri.go:89] found id: ""
	I1126 20:10:58.359228   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:10:58.359307   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:10:58.363274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:10:58.363346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:10:58.403777   59960 cri.go:89] found id: ""
	I1126 20:10:58.403804   59960 logs.go:282] 0 containers: []
	W1126 20:10:58.403814   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:10:58.403829   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:10:58.403890   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:10:58.504667   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:10:58.504702   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:10:58.517722   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:10:58.517750   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:10:58.589740   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:10:58.581328    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.582205    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.583896    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.584218    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.585780    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:10:58.581328    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.582205    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.583896    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.584218    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:10:58.585780    5822 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:10:58.589761   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:10:58.589774   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:10:58.617621   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:10:58.617648   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:10:58.660238   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:10:58.660281   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:10:58.709585   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:10:58.709624   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:10:58.783550   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:10:58.783586   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:10:58.820181   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:10:58.820219   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:10:58.848533   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:10:58.848564   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:10:58.921350   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:10:58.921390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:01.453687   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:01.467262   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:01.467365   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:01.498662   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:01.498715   59960 cri.go:89] found id: ""
	I1126 20:11:01.498724   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:01.498785   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.504322   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:01.504445   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:01.545072   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:01.545098   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:01.545105   59960 cri.go:89] found id: ""
	I1126 20:11:01.545113   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:01.545185   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.548993   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.552685   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:01.552797   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:01.582855   59960 cri.go:89] found id: ""
	I1126 20:11:01.582881   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.582891   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:01.582897   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:01.582954   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:01.613527   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:01.613548   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:01.613553   59960 cri.go:89] found id: ""
	I1126 20:11:01.613560   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:01.613629   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.618859   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.623550   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:01.623624   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:01.660116   59960 cri.go:89] found id: ""
	I1126 20:11:01.660140   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.660149   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:01.660159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:01.660221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:01.692418   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:01.692442   59960 cri.go:89] found id: ""
	I1126 20:11:01.692450   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:01.692509   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:01.696379   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:01.696453   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:01.729407   59960 cri.go:89] found id: ""
	I1126 20:11:01.729430   59960 logs.go:282] 0 containers: []
	W1126 20:11:01.729439   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:01.729447   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:01.729463   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:01.784458   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:01.784492   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:01.872850   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:01.872886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:01.903039   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:01.903068   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:01.942057   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:01.942084   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:02.024475   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:02.024514   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:02.128096   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:02.128133   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:02.199528   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:02.191565    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.192150    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.193873    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.194411    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.195999    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:02.191565    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.192150    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.193873    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.194411    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:02.195999    5992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:02.199554   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:02.199568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:02.226949   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:02.226985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:02.270517   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:02.270555   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:02.306879   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:02.306948   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:04.822921   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:04.834951   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:04.835018   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:04.862163   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:04.862219   59960 cri.go:89] found id: ""
	I1126 20:11:04.862244   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:04.862312   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.865957   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:04.866029   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:04.895638   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:04.895658   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:04.895663   59960 cri.go:89] found id: ""
	I1126 20:11:04.895669   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:04.895722   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.899645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.903838   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:04.903909   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:04.929326   59960 cri.go:89] found id: ""
	I1126 20:11:04.929389   59960 logs.go:282] 0 containers: []
	W1126 20:11:04.929422   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:04.929442   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:04.929522   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:04.956401   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:04.956472   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:04.956491   59960 cri.go:89] found id: ""
	I1126 20:11:04.956522   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:04.956593   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.960195   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:04.963812   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:04.963930   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:04.990366   59960 cri.go:89] found id: ""
	I1126 20:11:04.990387   59960 logs.go:282] 0 containers: []
	W1126 20:11:04.990395   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:04.990402   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:04.990468   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:05.019718   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:05.019752   59960 cri.go:89] found id: ""
	I1126 20:11:05.019762   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:05.019824   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:05.023681   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:05.023779   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:05.053886   59960 cri.go:89] found id: ""
	I1126 20:11:05.053915   59960 logs.go:282] 0 containers: []
	W1126 20:11:05.053953   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:05.053963   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:05.053994   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:05.152926   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:05.152963   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:05.165506   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:05.165534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:05.194915   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:05.194945   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:05.235104   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:05.235137   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:05.285215   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:05.285247   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:05.314134   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:05.314162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:05.341007   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:05.341034   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:05.418277   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:05.418313   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:05.491273   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:05.482790    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.483758    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.485510    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.486097    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.487714    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:05.482790    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.483758    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.485510    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.486097    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:05.487714    6141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:05.491294   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:05.491308   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:05.552151   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:05.552187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:08.086064   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:08.097504   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:08.097574   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:08.126757   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:08.126780   59960 cri.go:89] found id: ""
	I1126 20:11:08.126789   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:08.126851   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.131043   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:08.131119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:08.158212   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:08.158274   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:08.158289   59960 cri.go:89] found id: ""
	I1126 20:11:08.158297   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:08.158360   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.162104   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.166980   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:08.167053   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:08.193258   59960 cri.go:89] found id: ""
	I1126 20:11:08.193290   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.193300   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:08.193307   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:08.193374   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:08.219187   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:08.219210   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:08.219216   59960 cri.go:89] found id: ""
	I1126 20:11:08.219234   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:08.219313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.223489   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.227150   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:08.227228   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:08.255318   59960 cri.go:89] found id: ""
	I1126 20:11:08.255340   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.255348   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:08.255355   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:08.255411   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:08.282171   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:08.282194   59960 cri.go:89] found id: ""
	I1126 20:11:08.282202   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:08.282273   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:08.285788   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:08.285852   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:08.315430   59960 cri.go:89] found id: ""
	I1126 20:11:08.315505   59960 logs.go:282] 0 containers: []
	W1126 20:11:08.315538   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:08.315560   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:08.315580   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:08.345199   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:08.345268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:08.441184   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:08.441220   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:08.511176   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:08.500509    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.501151    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504004    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504546    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.506870    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:08.500509    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.501151    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504004    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.504546    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:08.506870    6242 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:08.511208   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:08.511222   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:08.543421   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:08.543450   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:08.604175   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:08.604207   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:08.632557   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:08.632623   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:08.663480   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:08.663506   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:08.675096   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:08.675127   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:08.713968   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:08.713998   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:08.759141   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:08.759176   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:11.351574   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:11.361875   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:11.361972   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:11.388446   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:11.388515   59960 cri.go:89] found id: ""
	I1126 20:11:11.388529   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:11.388594   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.392093   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:11.392176   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:11.421855   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:11.421875   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:11.421880   59960 cri.go:89] found id: ""
	I1126 20:11:11.421887   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:11.421974   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.425675   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.429670   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:11.429770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:11.455248   59960 cri.go:89] found id: ""
	I1126 20:11:11.455272   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.455280   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:11.455287   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:11.455349   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:11.481734   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:11.481755   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:11.481761   59960 cri.go:89] found id: ""
	I1126 20:11:11.481769   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:11.481841   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.485836   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.489303   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:11.489380   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:11.521985   59960 cri.go:89] found id: ""
	I1126 20:11:11.522011   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.522020   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:11.522036   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:11.522095   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:11.561668   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:11.561700   59960 cri.go:89] found id: ""
	I1126 20:11:11.561708   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:11.561772   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:11.565986   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:11.566063   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:11.594364   59960 cri.go:89] found id: ""
	I1126 20:11:11.594386   59960 logs.go:282] 0 containers: []
	W1126 20:11:11.594395   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:11.594404   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:11.594440   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:11.639020   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:11.639057   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:11.709026   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:11.709063   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:11.739742   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:11.739771   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:11.806014   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:11.797164    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798194    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798970    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.800645    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.801154    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:11.797164    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798194    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.798970    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.800645    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:11.801154    6392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:11.806036   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:11.806048   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:11.844958   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:11.844991   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:11.876607   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:11.876634   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:11.911651   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:11.911677   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:11.991136   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:11.991170   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:12.094606   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:12.094650   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:12.107579   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:12.107609   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:14.637133   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:14.648286   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:14.648355   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:14.678404   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:14.678427   59960 cri.go:89] found id: ""
	I1126 20:11:14.678435   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:14.678495   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.682257   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:14.682330   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:14.713744   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:14.713765   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:14.713770   59960 cri.go:89] found id: ""
	I1126 20:11:14.713777   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:14.713835   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.718000   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.721792   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:14.721916   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:14.753701   59960 cri.go:89] found id: ""
	I1126 20:11:14.753767   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.753793   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:14.753812   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:14.753951   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:14.782584   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:14.782609   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:14.782615   59960 cri.go:89] found id: ""
	I1126 20:11:14.782622   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:14.782679   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.786288   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.790091   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:14.790165   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:14.816545   59960 cri.go:89] found id: ""
	I1126 20:11:14.816570   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.816579   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:14.816586   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:14.816642   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:14.846080   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:14.846100   59960 cri.go:89] found id: ""
	I1126 20:11:14.846108   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:14.846166   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:14.849789   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:14.849880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:14.876460   59960 cri.go:89] found id: ""
	I1126 20:11:14.876491   59960 logs.go:282] 0 containers: []
	W1126 20:11:14.876500   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:14.876508   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:14.876518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:14.951236   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:14.951274   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:14.983322   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:14.983350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:15.061107   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:15.051102    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.052170    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.053243    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.054378    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.056334    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:15.051102    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.052170    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.053243    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.054378    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:15.056334    6513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:15.061129   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:15.061144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:15.097557   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:15.097587   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:15.138293   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:15.138326   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:15.168503   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:15.168532   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:15.267115   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:15.267150   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:15.279584   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:15.279615   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:15.326150   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:15.326184   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:15.389193   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:15.389226   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:17.918406   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:17.929053   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:17.929122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:17.953884   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:17.953945   59960 cri.go:89] found id: ""
	I1126 20:11:17.953954   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:17.954015   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.957395   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:17.957465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:17.983711   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:17.983731   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:17.983735   59960 cri.go:89] found id: ""
	I1126 20:11:17.983742   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:17.983795   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.987660   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:17.991154   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:17.991224   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:18.019969   59960 cri.go:89] found id: ""
	I1126 20:11:18.019998   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.020008   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:18.020015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:18.020073   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:18.061149   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:18.061172   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:18.061178   59960 cri.go:89] found id: ""
	I1126 20:11:18.061186   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:18.061246   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.065578   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.069815   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:18.069885   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:18.096457   59960 cri.go:89] found id: ""
	I1126 20:11:18.096479   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.096487   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:18.096494   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:18.096554   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:18.124303   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:18.124367   59960 cri.go:89] found id: ""
	I1126 20:11:18.124392   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:18.124471   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:18.130707   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:18.130839   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:18.156714   59960 cri.go:89] found id: ""
	I1126 20:11:18.156740   59960 logs.go:282] 0 containers: []
	W1126 20:11:18.156750   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:18.156759   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:18.156773   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:18.233800   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:18.233837   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:18.264943   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:18.264973   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:18.343435   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:18.335872    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.336444    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.337906    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.338530    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.339816    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:18.335872    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.336444    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.337906    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.338530    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:18.339816    6652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:18.343458   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:18.343470   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:18.372998   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:18.373026   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:18.416461   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:18.416495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:18.445233   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:18.445263   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:18.545748   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:18.545787   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:18.557806   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:18.557835   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:18.622509   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:18.622542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:18.707610   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:18.707689   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:21.236452   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:21.247662   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:21.247729   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:21.276004   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:21.276030   59960 cri.go:89] found id: ""
	I1126 20:11:21.276038   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:21.276125   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.279851   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:21.279945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:21.309267   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:21.309291   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:21.309297   59960 cri.go:89] found id: ""
	I1126 20:11:21.309304   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:21.309359   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.313384   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.317026   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:21.317099   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:21.347773   59960 cri.go:89] found id: ""
	I1126 20:11:21.347799   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.347807   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:21.347817   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:21.347901   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:21.389878   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:21.389898   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:21.389902   59960 cri.go:89] found id: ""
	I1126 20:11:21.389910   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:21.390028   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.396218   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.405704   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:21.405823   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:21.458505   59960 cri.go:89] found id: ""
	I1126 20:11:21.458573   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.458605   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:21.458635   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:21.458731   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:21.486896   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:21.486961   59960 cri.go:89] found id: ""
	I1126 20:11:21.486983   59960 logs.go:282] 1 containers: [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:21.487052   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:21.490729   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:21.490845   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:21.521776   59960 cri.go:89] found id: ""
	I1126 20:11:21.521798   59960 logs.go:282] 0 containers: []
	W1126 20:11:21.521806   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:21.521815   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:21.521827   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:21.540126   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:21.540201   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:21.612034   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:21.604355    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.605075    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.606757    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.607410    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.608381    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:21.604355    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.605075    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.606757    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.607410    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:21.608381    6776 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:21.612058   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:21.612072   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:21.658622   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:21.658657   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:21.707807   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:21.707844   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:21.769271   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:21.769306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:21.801295   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:21.801325   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:21.896605   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:21.896639   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:21.929176   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:21.929205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:21.967857   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:21.967884   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:22.001350   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:22.001375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:24.595423   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:24.606910   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:24.606980   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:24.638795   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:24.638819   59960 cri.go:89] found id: ""
	I1126 20:11:24.638827   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:24.638885   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.642601   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:24.642677   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:24.709965   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:24.709984   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:24.709989   59960 cri.go:89] found id: ""
	I1126 20:11:24.709996   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:24.710075   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.714848   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.719509   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:24.719668   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:24.756426   59960 cri.go:89] found id: ""
	I1126 20:11:24.756497   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.756521   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:24.756540   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:24.756658   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:24.803189   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:24.803256   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:24.803274   59960 cri.go:89] found id: ""
	I1126 20:11:24.803295   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:24.803379   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.808196   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.812071   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:24.812194   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:24.852305   59960 cri.go:89] found id: ""
	I1126 20:11:24.852378   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.852408   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:24.852429   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:24.852520   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:24.889194   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:24.889263   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:24.889294   59960 cri.go:89] found id: ""
	I1126 20:11:24.889320   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:24.889413   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.893347   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:24.897224   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:24.897334   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:24.930230   59960 cri.go:89] found id: ""
	I1126 20:11:24.930304   59960 logs.go:282] 0 containers: []
	W1126 20:11:24.930333   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:24.930344   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:24.930371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:25.035563   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:25.035604   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:25.054082   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:25.054112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:25.096053   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:25.096081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:25.145970   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:25.146007   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:25.185648   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:25.185678   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:25.214168   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:25.214199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:25.247077   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:25.247106   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:25.338812   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:25.330325    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.331301    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.332972    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.333487    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.335076    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:25.330325    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.331301    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.332972    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.333487    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:25.335076    6966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:25.338839   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:25.338854   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:25.379564   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:25.379600   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:25.447694   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:25.447730   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:25.472568   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:25.472598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:28.058550   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:28.076007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:28.076082   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:28.106329   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:28.106351   59960 cri.go:89] found id: ""
	I1126 20:11:28.106360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:28.106418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.110514   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:28.110591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:28.140757   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:28.140777   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:28.140782   59960 cri.go:89] found id: ""
	I1126 20:11:28.140789   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:28.140842   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.144844   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.148401   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:28.148473   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:28.174921   59960 cri.go:89] found id: ""
	I1126 20:11:28.174944   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.174953   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:28.174959   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:28.175022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:28.202405   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:28.202425   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:28.202429   59960 cri.go:89] found id: ""
	I1126 20:11:28.202436   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:28.202491   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.207455   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.211480   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:28.211548   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:28.239676   59960 cri.go:89] found id: ""
	I1126 20:11:28.239749   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.239773   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:28.239793   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:28.239857   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:28.269256   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:28.269277   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:28.269282   59960 cri.go:89] found id: ""
	I1126 20:11:28.269289   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:28.269344   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.273004   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:28.276329   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:28.276398   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:28.302206   59960 cri.go:89] found id: ""
	I1126 20:11:28.302272   59960 logs.go:282] 0 containers: []
	W1126 20:11:28.302298   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:28.302321   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:28.302363   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:28.332034   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:28.332062   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:28.376567   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:28.376603   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:28.441530   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:28.441568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:28.468188   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:28.468219   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:28.544745   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:28.544780   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:28.590841   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:28.590870   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:28.603163   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:28.603194   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:28.675368   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:28.666467    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.667143    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.668892    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.669848    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.671529    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:28.666467    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.667143    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.668892    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.669848    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:28.671529    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:28.675390   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:28.675403   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:28.716129   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:28.716160   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:28.746889   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:28.746916   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:28.784649   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:28.784678   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:31.386032   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:31.396663   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:31.396729   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:31.424252   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:31.424274   59960 cri.go:89] found id: ""
	I1126 20:11:31.424282   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:31.424337   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.427909   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:31.427983   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:31.459053   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:31.459075   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:31.459080   59960 cri.go:89] found id: ""
	I1126 20:11:31.459088   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:31.459148   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.462802   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.466564   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:31.466687   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:31.497981   59960 cri.go:89] found id: ""
	I1126 20:11:31.498003   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.498012   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:31.498018   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:31.498110   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:31.526027   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:31.526052   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:31.526057   59960 cri.go:89] found id: ""
	I1126 20:11:31.526065   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:31.526170   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.529987   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.534855   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:31.534945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:31.563109   59960 cri.go:89] found id: ""
	I1126 20:11:31.563169   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.563198   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:31.563219   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:31.563293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:31.589243   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:31.589265   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:31.589270   59960 cri.go:89] found id: ""
	I1126 20:11:31.589278   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:31.589354   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.593459   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:31.596946   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:31.597021   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:31.623525   59960 cri.go:89] found id: ""
	I1126 20:11:31.623558   59960 logs.go:282] 0 containers: []
	W1126 20:11:31.623567   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:31.623576   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:31.623587   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:31.652294   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:31.652373   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:31.735258   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:31.735294   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:31.768608   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:31.768683   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:31.870428   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:31.870508   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:31.897014   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:31.897042   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:32.001263   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:32.001299   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:32.038474   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:32.038514   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:32.052890   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:32.052925   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:32.157895   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:32.150135    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.150798    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152292    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152811    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.154388    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:32.150135    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.150798    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152292    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.152811    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:32.154388    7260 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:32.157991   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:32.158015   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:32.202276   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:32.202312   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:32.246886   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:32.246920   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:34.774920   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:34.785509   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:34.785619   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:34.817587   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:34.817656   59960 cri.go:89] found id: ""
	I1126 20:11:34.817682   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:34.817753   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.821524   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:34.821594   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:34.849130   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:34.849154   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:34.849159   59960 cri.go:89] found id: ""
	I1126 20:11:34.849167   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:34.849233   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.852945   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.856601   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:34.856684   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:34.883375   59960 cri.go:89] found id: ""
	I1126 20:11:34.883398   59960 logs.go:282] 0 containers: []
	W1126 20:11:34.883412   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:34.883450   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:34.883524   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:34.909798   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:34.909821   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:34.909826   59960 cri.go:89] found id: ""
	I1126 20:11:34.909834   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:34.909888   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.913552   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.916964   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:34.917033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:34.949567   59960 cri.go:89] found id: ""
	I1126 20:11:34.949592   59960 logs.go:282] 0 containers: []
	W1126 20:11:34.949601   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:34.949608   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:34.949663   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:34.977128   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:34.977150   59960 cri.go:89] found id: "2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:34.977156   59960 cri.go:89] found id: ""
	I1126 20:11:34.977163   59960 logs.go:282] 2 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed]
	I1126 20:11:34.977220   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.981001   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:34.984842   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:34.984957   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:35.012427   59960 cri.go:89] found id: ""
	I1126 20:11:35.012460   59960 logs.go:282] 0 containers: []
	W1126 20:11:35.012470   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:35.012479   59960 logs.go:123] Gathering logs for kube-controller-manager [2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed] ...
	I1126 20:11:35.012493   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2d543d941af1990a9a32a729e5c28a0c11ebc07b1265ba780a51542a81b743ed"
	I1126 20:11:35.040355   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:35.040396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:35.085028   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:35.085064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:35.113614   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:35.113649   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:35.153880   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:35.153911   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:35.198643   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:35.198675   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:35.268315   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:35.268350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:35.295776   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:35.295804   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:35.376804   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:35.376847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:35.482429   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:35.482467   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:35.495585   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:35.495620   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:35.570301   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:35.562818    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.563633    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565195    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565472    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.566934    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:35.562818    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.563633    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565195    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.565472    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:35.566934    7422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:35.570323   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:35.570336   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.104089   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:38.117181   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:38.117256   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:38.149986   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:38.150007   59960 cri.go:89] found id: ""
	I1126 20:11:38.150015   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:38.150071   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.153769   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:38.153836   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:38.181424   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:38.181445   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:38.181450   59960 cri.go:89] found id: ""
	I1126 20:11:38.181457   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:38.181514   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.186065   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.189965   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:38.190088   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:38.222377   59960 cri.go:89] found id: ""
	I1126 20:11:38.222403   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.222412   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:38.222418   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:38.222512   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:38.251289   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:38.251308   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:38.251312   59960 cri.go:89] found id: ""
	I1126 20:11:38.251319   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:38.251376   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.256455   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.260117   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:38.260191   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:38.285970   59960 cri.go:89] found id: ""
	I1126 20:11:38.285993   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.286001   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:38.286007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:38.286071   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:38.316333   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.316352   59960 cri.go:89] found id: ""
	I1126 20:11:38.316360   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:38.316418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:38.320056   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:38.320141   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:38.346321   59960 cri.go:89] found id: ""
	I1126 20:11:38.346343   59960 logs.go:282] 0 containers: []
	W1126 20:11:38.346355   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:38.346365   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:38.346377   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:38.373397   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:38.373424   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:38.425362   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:38.425395   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:38.453015   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:38.453091   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:38.532623   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:38.532697   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:38.633361   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:38.633397   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:38.645846   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:38.645873   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:38.703411   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:38.703444   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:38.767512   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:38.767547   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:38.796976   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:38.797004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:38.829009   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:38.829038   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:38.898466   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:38.890004    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.890695    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892444    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892921    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.894201    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:38.890004    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.890695    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892444    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.892921    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:38.894201    7575 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:41.398722   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:41.410132   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:41.410201   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:41.438116   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:41.438139   59960 cri.go:89] found id: ""
	I1126 20:11:41.438148   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:41.438205   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.442017   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:41.442090   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:41.469903   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:41.469958   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:41.469963   59960 cri.go:89] found id: ""
	I1126 20:11:41.469970   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:41.470027   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.474067   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.478045   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:41.478121   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:41.505356   59960 cri.go:89] found id: ""
	I1126 20:11:41.505421   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.505446   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:41.505473   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:41.505547   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:41.539013   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:41.539078   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:41.539097   59960 cri.go:89] found id: ""
	I1126 20:11:41.539120   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:41.539192   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.545082   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.548706   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:41.548780   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:41.575834   59960 cri.go:89] found id: ""
	I1126 20:11:41.575859   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.575867   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:41.575874   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:41.575934   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:41.611347   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:41.611373   59960 cri.go:89] found id: ""
	I1126 20:11:41.611381   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:41.611452   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:41.615789   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:41.615865   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:41.641022   59960 cri.go:89] found id: ""
	I1126 20:11:41.641047   59960 logs.go:282] 0 containers: []
	W1126 20:11:41.641057   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:41.641066   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:41.641078   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:41.742347   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:41.742381   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:41.754134   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:41.754164   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:41.831601   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:41.821574    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.822287    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.823756    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.824699    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.826433    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:41.821574    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.822287    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.823756    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.824699    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:41.826433    7650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:41.831624   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:41.831637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:41.860096   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:41.860125   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:41.910250   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:41.910285   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:41.980123   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:41.980161   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:42.010802   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:42.010829   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:42.106028   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:42.106070   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:42.164514   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:42.164559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:42.271103   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:42.271151   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:44.839838   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:44.850546   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:44.850618   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:44.876918   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:44.876988   59960 cri.go:89] found id: ""
	I1126 20:11:44.877011   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:44.877094   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.881043   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:44.881125   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:44.911219   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:44.911239   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:44.911243   59960 cri.go:89] found id: ""
	I1126 20:11:44.911250   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:44.911304   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.914984   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.918517   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:44.918591   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:44.948367   59960 cri.go:89] found id: ""
	I1126 20:11:44.948393   59960 logs.go:282] 0 containers: []
	W1126 20:11:44.948403   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:44.948410   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:44.948488   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:44.979725   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:44.979749   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:44.979762   59960 cri.go:89] found id: ""
	I1126 20:11:44.979770   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:44.979825   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.983672   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:44.987318   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:44.987393   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:45.013302   59960 cri.go:89] found id: ""
	I1126 20:11:45.013326   59960 logs.go:282] 0 containers: []
	W1126 20:11:45.013335   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:45.013342   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:45.013400   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:45.055627   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:45.055649   59960 cri.go:89] found id: ""
	I1126 20:11:45.055657   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:45.055726   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:45.085558   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:45.085645   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:45.151023   59960 cri.go:89] found id: ""
	I1126 20:11:45.151097   59960 logs.go:282] 0 containers: []
	W1126 20:11:45.151125   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:45.151149   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:45.151189   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:45.299197   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:45.299495   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:45.414522   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:45.414561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:45.426305   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:45.426334   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:45.498361   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:45.490138    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.490855    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.492369    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.493032    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.494581    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:45.490138    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.490855    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.492369    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.493032    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:45.494581    7787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:45.498385   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:45.498406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:45.544282   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:45.544315   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:45.572601   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:45.572628   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:45.618675   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:45.618704   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:45.644699   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:45.644729   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:45.692766   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:45.692847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:45.768264   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:45.768298   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.298071   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:48.309786   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:48.309955   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:48.338906   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:48.338929   59960 cri.go:89] found id: ""
	I1126 20:11:48.338938   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:48.339013   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.342703   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:48.342807   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:48.373459   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:48.373483   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:48.373489   59960 cri.go:89] found id: ""
	I1126 20:11:48.373497   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:48.373571   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.377243   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.380907   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:48.380978   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:48.410171   59960 cri.go:89] found id: ""
	I1126 20:11:48.410194   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.410203   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:48.410210   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:48.410269   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:48.438118   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:48.438141   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.438146   59960 cri.go:89] found id: ""
	I1126 20:11:48.438153   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:48.438208   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.441706   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.445239   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:48.445331   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:48.471795   59960 cri.go:89] found id: ""
	I1126 20:11:48.471818   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.471827   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:48.471834   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:48.471894   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:48.499373   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:48.499444   59960 cri.go:89] found id: ""
	I1126 20:11:48.499459   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:48.499520   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:48.503413   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:48.503486   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:48.530399   59960 cri.go:89] found id: ""
	I1126 20:11:48.530421   59960 logs.go:282] 0 containers: []
	W1126 20:11:48.530435   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:48.530450   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:48.530464   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:48.571849   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:48.571882   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:48.658179   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:48.658279   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:48.689018   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:48.689045   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:48.763174   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:48.763207   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:48.778567   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:48.778596   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:48.827328   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:48.827365   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:48.857288   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:48.857365   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:48.888507   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:48.888539   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:48.988930   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:48.988967   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:49.069225   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:49.055449    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.056233    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.057886    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.058530    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.060083    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:49.055449    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.056233    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.057886    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.058530    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:49.060083    7978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:49.069248   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:49.069262   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:51.595258   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:51.606745   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:51.606819   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:51.636395   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:51.636416   59960 cri.go:89] found id: ""
	I1126 20:11:51.636430   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:51.636488   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.640040   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:51.640115   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:51.676792   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:51.676812   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:51.676816   59960 cri.go:89] found id: ""
	I1126 20:11:51.676824   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:51.676877   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.681110   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.685068   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:51.685183   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:51.720013   59960 cri.go:89] found id: ""
	I1126 20:11:51.720038   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.720047   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:51.720054   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:51.720111   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:51.748336   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:51.748360   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:51.748375   59960 cri.go:89] found id: ""
	I1126 20:11:51.748383   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:51.748439   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.752267   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.756170   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:51.756241   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:51.783057   59960 cri.go:89] found id: ""
	I1126 20:11:51.783086   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.783095   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:51.783101   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:51.783163   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:51.811250   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:51.811272   59960 cri.go:89] found id: ""
	I1126 20:11:51.811282   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:51.811338   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:51.815120   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:51.815232   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:51.846026   59960 cri.go:89] found id: ""
	I1126 20:11:51.846049   59960 logs.go:282] 0 containers: []
	W1126 20:11:51.846064   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:51.846074   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:51.846086   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:51.890348   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:51.890380   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:51.920851   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:51.920922   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:51.977107   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:51.977140   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:52.060932   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:52.060981   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:52.093050   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:52.093078   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:52.176431   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:52.176468   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:52.215980   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:52.216012   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:52.327858   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:52.327901   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:52.340252   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:52.340285   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:52.418993   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:52.410090    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.410776    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.412508    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.413095    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.414685    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:52.410090    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.410776    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.412508    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.413095    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:52.414685    8112 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:52.419016   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:52.419029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:54.944539   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:54.955542   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:54.955615   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:54.986048   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:54.986074   59960 cri.go:89] found id: ""
	I1126 20:11:54.986083   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:54.986139   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:54.989757   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:54.989829   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:55.016053   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:55.016085   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:55.016091   59960 cri.go:89] found id: ""
	I1126 20:11:55.016099   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:55.016174   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.019787   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.023250   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:55.023321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:55.069450   59960 cri.go:89] found id: ""
	I1126 20:11:55.069473   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.069482   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:55.069489   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:55.069572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:55.098641   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:55.098664   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:55.098669   59960 cri.go:89] found id: ""
	I1126 20:11:55.098676   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:55.098732   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.102435   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.106227   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:55.106351   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:55.138121   59960 cri.go:89] found id: ""
	I1126 20:11:55.138145   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.138154   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:55.138174   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:55.138236   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:55.167513   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:55.167544   59960 cri.go:89] found id: ""
	I1126 20:11:55.167553   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:55.167618   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:55.171313   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:55.171381   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:55.202786   59960 cri.go:89] found id: ""
	I1126 20:11:55.202813   59960 logs.go:282] 0 containers: []
	W1126 20:11:55.202822   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:55.202832   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:55.202866   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:55.302444   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:55.302521   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:55.340281   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:55.340307   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:55.380642   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:55.380671   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:55.413529   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:55.413559   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:55.441562   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:55.441590   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:11:55.518521   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:55.518561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:55.558444   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:55.558478   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:55.571280   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:55.571312   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:55.640808   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:55.631279    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.631827    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.633724    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.634294    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.636622    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:55.631279    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.631827    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.633724    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.634294    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:55.636622    8240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:55.640840   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:55.640855   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:55.687489   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:55.687525   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.274871   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:11:58.285429   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:11:58.285499   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:11:58.313375   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:58.313399   59960 cri.go:89] found id: ""
	I1126 20:11:58.313406   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:11:58.313459   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.316973   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:11:58.317046   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:11:58.343195   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:58.343222   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:58.343233   59960 cri.go:89] found id: ""
	I1126 20:11:58.343241   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:11:58.343299   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.346903   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.350464   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:11:58.350532   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:11:58.389630   59960 cri.go:89] found id: ""
	I1126 20:11:58.389651   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.389659   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:11:58.389666   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:11:58.389727   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:11:58.417327   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.417347   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:58.417351   59960 cri.go:89] found id: ""
	I1126 20:11:58.417358   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:11:58.417415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.421999   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.425800   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:11:58.425864   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:11:58.452945   59960 cri.go:89] found id: ""
	I1126 20:11:58.452969   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.452977   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:11:58.452983   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:11:58.453043   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:11:58.488167   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:58.488198   59960 cri.go:89] found id: ""
	I1126 20:11:58.488207   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:11:58.488290   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:11:58.492158   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:11:58.492254   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:11:58.519792   59960 cri.go:89] found id: ""
	I1126 20:11:58.519815   59960 logs.go:282] 0 containers: []
	W1126 20:11:58.519824   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:11:58.519833   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:11:58.519845   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:11:58.539152   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:11:58.539178   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:11:58.611844   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:11:58.602656    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.604433    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.605264    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.606165    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.607783    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:11:58.602656    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.604433    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.605264    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.606165    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:11:58.607783    8331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:11:58.611916   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:11:58.611936   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:11:58.653684   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:11:58.653755   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:11:58.701629   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:11:58.701698   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:11:58.797678   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:11:58.797712   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:11:58.826943   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:11:58.826971   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:11:58.870347   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:11:58.870382   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:11:58.935086   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:11:58.935124   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:11:58.968825   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:11:58.968856   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:11:58.997914   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:11:58.998030   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:01.577720   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:01.589568   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:01.589642   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:01.621435   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:01.621457   59960 cri.go:89] found id: ""
	I1126 20:12:01.621466   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:01.621521   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.625557   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:01.625630   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:01.653424   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:01.653447   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:01.653452   59960 cri.go:89] found id: ""
	I1126 20:12:01.653459   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:01.653520   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.658113   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.663163   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:01.663279   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:01.690617   59960 cri.go:89] found id: ""
	I1126 20:12:01.690692   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.690707   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:01.690714   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:01.690776   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:01.721669   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:01.721691   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:01.721696   59960 cri.go:89] found id: ""
	I1126 20:12:01.721705   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:01.721760   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.725774   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.729528   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:01.729608   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:01.755428   59960 cri.go:89] found id: ""
	I1126 20:12:01.755452   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.755461   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:01.755468   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:01.755529   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:01.783818   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:01.783841   59960 cri.go:89] found id: ""
	I1126 20:12:01.783849   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:01.783905   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:01.787656   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:01.787726   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:01.815958   59960 cri.go:89] found id: ""
	I1126 20:12:01.816025   59960 logs.go:282] 0 containers: []
	W1126 20:12:01.816050   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:01.816067   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:01.816080   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:01.867560   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:01.867592   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:01.932205   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:01.932256   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:02.002408   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:02.002441   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:02.051577   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:02.051612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:02.088918   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:02.088948   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:02.168080   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:02.158735    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.159253    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162045    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162706    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.164462    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:02.158735    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.159253    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162045    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.162706    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:02.164462    8503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:02.168105   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:02.168119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:02.244385   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:02.244435   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:02.282263   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:02.282293   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:02.383774   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:02.383810   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:02.399682   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:02.399712   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:04.928429   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:04.939418   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:04.939502   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:04.967318   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:04.967344   59960 cri.go:89] found id: ""
	I1126 20:12:04.967352   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:04.967406   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:04.971172   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:04.971242   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:04.998636   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:04.998660   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:04.998666   59960 cri.go:89] found id: ""
	I1126 20:12:04.998673   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:04.998728   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.002734   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.006234   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:05.006304   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:05.031905   59960 cri.go:89] found id: ""
	I1126 20:12:05.031931   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.031948   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:05.031954   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:05.032022   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:05.062024   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:05.062047   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:05.062053   59960 cri.go:89] found id: ""
	I1126 20:12:05.062061   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:05.062119   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.066633   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.070769   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:05.070894   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:05.098088   59960 cri.go:89] found id: ""
	I1126 20:12:05.098113   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.098123   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:05.098130   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:05.098213   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:05.131371   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:05.131394   59960 cri.go:89] found id: ""
	I1126 20:12:05.131403   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:05.131477   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:05.135270   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:05.135372   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:05.162342   59960 cri.go:89] found id: ""
	I1126 20:12:05.162365   59960 logs.go:282] 0 containers: []
	W1126 20:12:05.162374   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:05.162383   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:05.162395   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:05.235501   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:05.227170    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.227750    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229253    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229720    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.231198    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:05.227170    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.227750    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229253    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.229720    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:05.231198    8598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:05.235522   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:05.235536   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:05.263102   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:05.263128   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:05.302111   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:05.302144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:05.333187   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:05.333216   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:05.359477   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:05.359505   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:05.438760   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:05.438798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:05.451777   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:05.451807   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:05.498508   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:05.498543   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:05.568808   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:05.568843   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:05.616879   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:05.616909   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:08.220414   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:08.231126   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:08.231199   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:08.258035   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:08.258105   59960 cri.go:89] found id: ""
	I1126 20:12:08.258125   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:08.258192   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.262176   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:08.262249   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:08.289710   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:08.289733   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:08.289739   59960 cri.go:89] found id: ""
	I1126 20:12:08.289750   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:08.289805   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.293485   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.297802   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:08.297880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:08.327209   59960 cri.go:89] found id: ""
	I1126 20:12:08.327234   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.327243   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:08.327263   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:08.327336   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:08.357819   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:08.357840   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:08.357845   59960 cri.go:89] found id: ""
	I1126 20:12:08.357852   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:08.357906   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.361705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.365237   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:08.365328   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:08.394319   59960 cri.go:89] found id: ""
	I1126 20:12:08.394383   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.394399   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:08.394406   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:08.394480   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:08.420463   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:08.420527   59960 cri.go:89] found id: ""
	I1126 20:12:08.420553   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:08.420638   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:08.424335   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:08.424450   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:08.452961   59960 cri.go:89] found id: ""
	I1126 20:12:08.452986   59960 logs.go:282] 0 containers: []
	W1126 20:12:08.452995   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:08.453003   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:08.453014   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:08.493988   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:08.494022   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:08.544465   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:08.544499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:08.574385   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:08.574413   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:08.586334   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:08.586371   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:08.667454   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:08.650997    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.659303    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.660307    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662037    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662374    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:08.650997    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.659303    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.660307    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662037    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:08.662374    8764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:08.667486   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:08.667499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:08.699349   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:08.699378   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:08.764949   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:08.764985   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:08.796757   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:08.796785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:08.880624   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:08.880660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:08.914640   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:08.914667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:11.513808   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:11.524482   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:11.524580   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:11.558859   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:11.558902   59960 cri.go:89] found id: ""
	I1126 20:12:11.558911   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:11.558970   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.562673   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:11.562747   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:11.588932   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:11.588951   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:11.588956   59960 cri.go:89] found id: ""
	I1126 20:12:11.588963   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:11.589017   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.592810   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.596570   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:11.596643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:11.623065   59960 cri.go:89] found id: ""
	I1126 20:12:11.623145   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.623161   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:11.623169   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:11.623229   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:11.650581   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:11.650605   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:11.650610   59960 cri.go:89] found id: ""
	I1126 20:12:11.650618   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:11.650671   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.655559   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.659747   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:11.659817   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:11.687296   59960 cri.go:89] found id: ""
	I1126 20:12:11.687322   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.687331   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:11.687337   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:11.687396   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:11.720511   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:11.720579   59960 cri.go:89] found id: ""
	I1126 20:12:11.720617   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:11.720708   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:11.724437   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:11.724506   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:11.749548   59960 cri.go:89] found id: ""
	I1126 20:12:11.749582   59960 logs.go:282] 0 containers: []
	W1126 20:12:11.749591   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:11.749601   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:11.749612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:11.844417   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:11.844451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:11.856841   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:11.856870   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:11.927039   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:11.919031    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.919434    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921013    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921770    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.923409    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:11.919031    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.919434    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921013    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.921770    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:11.923409    8882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:11.927072   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:11.927085   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:11.952749   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:11.952778   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:11.979828   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:11.979854   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:12.054969   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:12.055007   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:12.096829   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:12.096861   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:12.139040   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:12.139073   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:12.188630   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:12.188665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:12.261491   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:12.261525   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:14.793314   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:14.805690   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:14.805792   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:14.834480   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:14.834550   59960 cri.go:89] found id: ""
	I1126 20:12:14.834563   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:14.834624   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.838451   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:14.838546   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:14.865258   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:14.865280   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:14.865288   59960 cri.go:89] found id: ""
	I1126 20:12:14.865296   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:14.865369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.869042   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.872598   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:14.872673   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:14.899453   59960 cri.go:89] found id: ""
	I1126 20:12:14.899475   59960 logs.go:282] 0 containers: []
	W1126 20:12:14.899484   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:14.899491   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:14.899553   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:14.927802   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:14.927830   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:14.927837   59960 cri.go:89] found id: ""
	I1126 20:12:14.927845   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:14.927940   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.932558   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:14.936133   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:14.936204   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:14.961102   59960 cri.go:89] found id: ""
	I1126 20:12:14.961173   59960 logs.go:282] 0 containers: []
	W1126 20:12:14.961195   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:14.961215   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:14.961302   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:15.002363   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:15.002384   59960 cri.go:89] found id: ""
	I1126 20:12:15.002393   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:15.002447   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:15.006142   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:15.006212   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:15.032134   59960 cri.go:89] found id: ""
	I1126 20:12:15.032199   59960 logs.go:282] 0 containers: []
	W1126 20:12:15.032214   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:15.032224   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:15.032240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:15.081347   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:15.081379   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:15.180623   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:15.180658   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:15.209901   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:15.209962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:15.262607   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:15.262636   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:15.288510   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:15.288544   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:15.367680   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:15.367714   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:15.412204   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:15.412231   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:15.424270   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:15.424300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:15.503073   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:15.494667    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.495283    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.496993    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.497515    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.498972    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:15.494667    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.495283    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.496993    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.497515    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:15.498972    9062 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:15.503139   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:15.503167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:15.550262   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:15.550296   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.118444   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:18.129864   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:18.129981   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:18.156819   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:18.156838   59960 cri.go:89] found id: ""
	I1126 20:12:18.156846   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:18.156904   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.161071   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:18.161149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:18.189616   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:18.189639   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:18.189644   59960 cri.go:89] found id: ""
	I1126 20:12:18.189651   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:18.189705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.193599   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.197622   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:18.197702   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:18.229000   59960 cri.go:89] found id: ""
	I1126 20:12:18.229024   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.229034   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:18.229041   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:18.229097   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:18.258704   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.258728   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:18.258734   59960 cri.go:89] found id: ""
	I1126 20:12:18.258741   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:18.258799   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.262617   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.266630   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:18.266703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:18.294498   59960 cri.go:89] found id: ""
	I1126 20:12:18.294520   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.294528   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:18.294535   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:18.294592   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:18.321461   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:18.321534   59960 cri.go:89] found id: ""
	I1126 20:12:18.321556   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:18.321645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:18.325350   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:18.325460   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:18.351492   59960 cri.go:89] found id: ""
	I1126 20:12:18.351553   59960 logs.go:282] 0 containers: []
	W1126 20:12:18.351579   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:18.351599   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:18.351637   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:18.407171   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:18.407205   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:18.439080   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:18.439112   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:18.547958   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:18.547995   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:18.619721   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:18.609846    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.610654    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612119    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612768    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.614366    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:18.609846    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.610654    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612119    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.612768    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:18.614366    9169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:18.619742   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:18.619754   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:18.645098   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:18.645177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:18.682606   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:18.682639   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:18.763422   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:18.763453   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:18.795735   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:18.795762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:18.822004   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:18.822035   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:18.896691   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:18.896727   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:21.410083   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:21.420840   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:21.420938   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:21.446994   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:21.447016   59960 cri.go:89] found id: ""
	I1126 20:12:21.447024   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:21.447102   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.450650   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:21.450721   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:21.479530   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:21.479554   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:21.479559   59960 cri.go:89] found id: ""
	I1126 20:12:21.479566   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:21.479639   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.483856   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.487301   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:21.487396   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:21.514632   59960 cri.go:89] found id: ""
	I1126 20:12:21.514655   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.514664   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:21.514677   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:21.514734   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:21.552676   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:21.552697   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:21.552701   59960 cri.go:89] found id: ""
	I1126 20:12:21.552708   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:21.552764   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.558562   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.562503   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:21.562570   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:21.592027   59960 cri.go:89] found id: ""
	I1126 20:12:21.592051   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.592059   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:21.592065   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:21.592122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:21.622050   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:21.622069   59960 cri.go:89] found id: ""
	I1126 20:12:21.622078   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:21.622133   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:21.625979   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:21.626057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:21.659506   59960 cri.go:89] found id: ""
	I1126 20:12:21.659530   59960 logs.go:282] 0 containers: []
	W1126 20:12:21.659539   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:21.659548   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:21.659561   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:21.692379   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:21.692406   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:21.765021   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:21.765055   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:21.839116   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:21.830975    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.831759    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833349    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833904    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.835476    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:21.830975    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.831759    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833349    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.833904    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:21.835476    9297 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:21.839140   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:21.839153   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:21.865386   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:21.865413   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:21.904223   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:21.904257   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:21.949513   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:21.949545   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:21.975811   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:21.975838   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:22.009804   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:22.009830   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:22.114067   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:22.114107   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:22.129823   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:22.129850   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:24.699777   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:24.710717   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:24.710835   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:24.737361   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:24.737395   59960 cri.go:89] found id: ""
	I1126 20:12:24.737404   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:24.737467   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.741100   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:24.741181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:24.766942   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:24.767005   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:24.767023   59960 cri.go:89] found id: ""
	I1126 20:12:24.767038   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:24.767117   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.771423   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.775599   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:24.775679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:24.807211   59960 cri.go:89] found id: ""
	I1126 20:12:24.807238   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.807247   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:24.807254   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:24.807313   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:24.839448   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:24.839474   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:24.839480   59960 cri.go:89] found id: ""
	I1126 20:12:24.839487   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:24.839543   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.843345   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.846785   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:24.846859   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:24.875974   59960 cri.go:89] found id: ""
	I1126 20:12:24.875999   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.876008   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:24.876015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:24.876074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:24.904623   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:24.904646   59960 cri.go:89] found id: ""
	I1126 20:12:24.904655   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:24.904729   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:24.908536   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:24.908626   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:24.937367   59960 cri.go:89] found id: ""
	I1126 20:12:24.937448   59960 logs.go:282] 0 containers: []
	W1126 20:12:24.937471   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:24.937494   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:24.937534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:24.976827   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:24.976864   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:25.024594   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:25.024629   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:25.103663   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:25.103701   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:25.184899   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:25.184934   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:25.288663   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:25.288696   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:25.303312   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:25.303340   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:25.371319   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:25.361818    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.362509    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.364256    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.365013    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.366870    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:25.361818    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.362509    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.364256    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.365013    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:25.366870    9457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:25.371342   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:25.371357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:25.399886   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:25.399954   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:25.431130   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:25.431162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:25.457679   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:25.457758   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:27.990400   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:28.001290   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:28.001359   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:28.027402   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:28.027424   59960 cri.go:89] found id: ""
	I1126 20:12:28.027441   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:28.027501   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.030992   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:28.031083   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:28.072993   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:28.073014   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:28.073019   59960 cri.go:89] found id: ""
	I1126 20:12:28.073026   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:28.073084   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.076846   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.080628   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:28.080762   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:28.107876   59960 cri.go:89] found id: ""
	I1126 20:12:28.107902   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.107911   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:28.107918   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:28.107993   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:28.135277   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:28.135299   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:28.135305   59960 cri.go:89] found id: ""
	I1126 20:12:28.135312   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:28.135369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.139340   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.143115   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:28.143193   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:28.179129   59960 cri.go:89] found id: ""
	I1126 20:12:28.179230   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.179259   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:28.179273   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:28.179346   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:28.208432   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:28.208453   59960 cri.go:89] found id: ""
	I1126 20:12:28.208465   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:28.208523   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:28.212104   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:28.212174   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:28.239214   59960 cri.go:89] found id: ""
	I1126 20:12:28.239290   59960 logs.go:282] 0 containers: []
	W1126 20:12:28.239307   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:28.239317   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:28.239331   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:28.311306   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:28.311342   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:28.340943   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:28.340972   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:28.376088   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:28.376113   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:28.447578   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:28.440425    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.440837    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442342    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442644    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.444078    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:28.440425    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.440837    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442342    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.442644    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:28.444078    9590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:28.447601   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:28.447613   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:28.494672   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:28.494707   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:28.524817   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:28.524847   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:28.611534   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:28.611568   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:28.717586   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:28.717621   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:28.729869   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:28.729894   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:28.755777   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:28.755805   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.304943   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:31.316121   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:31.316189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:31.344914   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:31.344936   59960 cri.go:89] found id: ""
	I1126 20:12:31.344945   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:31.345000   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.348636   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:31.348708   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:31.376592   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.376614   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:31.376623   59960 cri.go:89] found id: ""
	I1126 20:12:31.376630   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:31.376683   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.380757   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.384468   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:31.384545   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:31.415544   59960 cri.go:89] found id: ""
	I1126 20:12:31.415570   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.415579   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:31.415586   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:31.415646   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:31.441604   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:31.441680   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:31.441699   59960 cri.go:89] found id: ""
	I1126 20:12:31.441723   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:31.441808   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.445590   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.449159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:31.449233   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:31.475467   59960 cri.go:89] found id: ""
	I1126 20:12:31.475492   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.475501   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:31.475507   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:31.475567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:31.505974   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:31.505995   59960 cri.go:89] found id: ""
	I1126 20:12:31.506004   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:31.506068   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:31.510913   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:31.510988   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:31.555870   59960 cri.go:89] found id: ""
	I1126 20:12:31.555901   59960 logs.go:282] 0 containers: []
	W1126 20:12:31.555911   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:31.555920   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:31.555932   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:31.569317   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:31.569396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:31.639071   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:31.630335    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.631132    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.632992    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.633425    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.635012    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:31.630335    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.631132    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.632992    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.633425    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:31.635012    9706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:31.639141   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:31.639171   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:31.685122   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:31.685156   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:31.715735   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:31.715763   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:31.744469   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:31.744499   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:31.782788   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:31.782822   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:31.854784   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:31.854820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:31.883960   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:31.883989   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:31.968197   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:31.968235   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:32.000618   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:32.000646   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:34.599812   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:34.610580   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:34.610690   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:34.643812   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:34.643835   59960 cri.go:89] found id: ""
	I1126 20:12:34.643844   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:34.643902   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.647819   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:34.647891   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:34.681825   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:34.681849   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:34.681855   59960 cri.go:89] found id: ""
	I1126 20:12:34.681863   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:34.681959   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.685589   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.689208   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:34.689280   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:34.719704   59960 cri.go:89] found id: ""
	I1126 20:12:34.719727   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.719736   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:34.719743   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:34.719802   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:34.745609   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:34.745632   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:34.745639   59960 cri.go:89] found id: ""
	I1126 20:12:34.745646   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:34.745704   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.749369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.752915   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:34.752982   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:34.778956   59960 cri.go:89] found id: ""
	I1126 20:12:34.778982   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.778996   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:34.779003   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:34.779059   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:34.805123   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:34.805146   59960 cri.go:89] found id: ""
	I1126 20:12:34.805153   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:34.805211   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:34.808760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:34.808834   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:34.834427   59960 cri.go:89] found id: ""
	I1126 20:12:34.834452   59960 logs.go:282] 0 containers: []
	W1126 20:12:34.834462   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:34.834471   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:34.834482   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:34.912760   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:34.912792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:35.015751   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:35.015790   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:35.046216   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:35.046291   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:35.092725   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:35.092760   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:35.163096   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:35.163130   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:35.191405   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:35.191488   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:35.227181   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:35.227213   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:35.240889   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:35.240922   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:35.311849   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:35.302602    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.303934    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.304899    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.306705    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.307280    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:35.302602    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.303934    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.304899    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.306705    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:35.307280    9888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:35.311871   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:35.311884   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:35.356916   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:35.356951   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:37.883250   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:37.894052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:37.894122   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:37.924918   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:37.924943   59960 cri.go:89] found id: ""
	I1126 20:12:37.924956   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:37.925020   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.928865   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:37.928940   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:37.961907   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:37.961958   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:37.961964   59960 cri.go:89] found id: ""
	I1126 20:12:37.961971   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:37.962035   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.965843   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:37.969339   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:37.969409   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:37.995343   59960 cri.go:89] found id: ""
	I1126 20:12:37.995373   59960 logs.go:282] 0 containers: []
	W1126 20:12:37.995381   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:37.995388   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:37.995491   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:38.022312   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:38.022334   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:38.022339   59960 cri.go:89] found id: ""
	I1126 20:12:38.022346   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:38.022413   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.026080   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.029533   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:38.029622   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:38.060280   59960 cri.go:89] found id: ""
	I1126 20:12:38.060307   59960 logs.go:282] 0 containers: []
	W1126 20:12:38.060346   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:38.060368   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:38.060437   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:38.091248   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:38.091312   59960 cri.go:89] found id: ""
	I1126 20:12:38.091327   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:38.091425   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:38.095836   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:38.095914   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:38.125378   59960 cri.go:89] found id: ""
	I1126 20:12:38.125403   59960 logs.go:282] 0 containers: []
	W1126 20:12:38.125413   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:38.125422   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:38.125436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:38.151847   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:38.151875   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:38.202356   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:38.202391   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:38.247650   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:38.247725   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:38.275709   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:38.275736   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:38.307514   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:38.307542   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:38.404957   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:38.404994   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:38.491924   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:38.491962   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:38.521423   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:38.521460   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:38.598021   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:38.598053   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:38.610973   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:38.611004   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:38.687841   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:38.679705   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.680686   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.681793   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.682498   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.684162   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:38.679705   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.680686   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.681793   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.682498   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:38.684162   10042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:41.188401   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:41.199011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:41.199080   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:41.227170   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:41.227196   59960 cri.go:89] found id: ""
	I1126 20:12:41.227205   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:41.227260   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.230873   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:41.230945   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:41.257484   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:41.257506   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:41.257522   59960 cri.go:89] found id: ""
	I1126 20:12:41.257529   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:41.257584   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.261286   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.265036   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:41.265101   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:41.290579   59960 cri.go:89] found id: ""
	I1126 20:12:41.290645   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.290669   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:41.290682   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:41.290741   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:41.319766   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:41.319786   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:41.319791   59960 cri.go:89] found id: ""
	I1126 20:12:41.319799   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:41.319859   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.323637   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.327077   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:41.327177   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:41.356676   59960 cri.go:89] found id: ""
	I1126 20:12:41.356702   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.356711   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:41.356719   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:41.356783   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:41.385771   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:41.385790   59960 cri.go:89] found id: ""
	I1126 20:12:41.385798   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:41.385852   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:41.389446   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:41.389544   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:41.416642   59960 cri.go:89] found id: ""
	I1126 20:12:41.416710   59960 logs.go:282] 0 containers: []
	W1126 20:12:41.416732   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:41.416754   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:41.416788   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:41.482246   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:41.473419   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.474136   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.475824   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.476403   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.478152   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:41.473419   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.474136   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.475824   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.476403   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:41.478152   10111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:41.482311   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:41.482339   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:41.509950   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:41.510016   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:41.557291   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:41.557324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:41.584211   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:41.584240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:41.666177   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:41.666212   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:41.767334   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:41.767369   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:41.781064   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:41.781089   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:41.825285   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:41.825321   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:41.892538   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:41.892573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:41.920754   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:41.920785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:44.468280   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:44.479465   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:44.479546   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:44.507592   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:44.507615   59960 cri.go:89] found id: ""
	I1126 20:12:44.507623   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:44.507679   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.511422   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:44.511510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:44.543146   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:44.543169   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:44.543174   59960 cri.go:89] found id: ""
	I1126 20:12:44.543181   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:44.543251   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.547022   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.550639   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:44.550719   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:44.579025   59960 cri.go:89] found id: ""
	I1126 20:12:44.579054   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.579063   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:44.579070   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:44.579139   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:44.611309   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:44.611332   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:44.611336   59960 cri.go:89] found id: ""
	I1126 20:12:44.611344   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:44.611407   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.615332   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.619108   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:44.619183   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:44.645161   59960 cri.go:89] found id: ""
	I1126 20:12:44.645185   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.645194   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:44.645201   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:44.645257   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:44.684280   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:44.684301   59960 cri.go:89] found id: ""
	I1126 20:12:44.684310   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:44.684364   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:44.687985   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:44.688057   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:44.713170   59960 cri.go:89] found id: ""
	I1126 20:12:44.713193   59960 logs.go:282] 0 containers: []
	W1126 20:12:44.713202   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:44.713211   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:44.713225   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:44.790764   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:44.782647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.783505   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785179   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785579   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.787022   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:44.782647   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.783505   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785179   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.785579   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:44.787022   10250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:44.790787   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:44.790801   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:44.841911   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:44.842082   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:44.886124   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:44.886155   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:44.956783   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:44.956817   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:44.992805   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:44.992834   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:45.021163   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:45.021190   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:45.060873   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:45.061452   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:45.201027   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:45.201119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:45.266419   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:45.266547   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:45.415986   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:45.416024   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:47.928674   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:47.940771   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:47.940843   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:47.966175   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:47.966194   59960 cri.go:89] found id: ""
	I1126 20:12:47.966202   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:47.966254   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:47.969908   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:47.970011   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:47.997001   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:47.997027   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:47.997032   59960 cri.go:89] found id: ""
	I1126 20:12:47.997040   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:47.997096   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.001757   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.005881   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:48.005980   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:48.031565   59960 cri.go:89] found id: ""
	I1126 20:12:48.031587   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.031595   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:48.031602   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:48.031660   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:48.063357   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:48.063380   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:48.063386   59960 cri.go:89] found id: ""
	I1126 20:12:48.063393   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:48.063450   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.068044   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.073135   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:48.073260   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:48.103364   59960 cri.go:89] found id: ""
	I1126 20:12:48.103391   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.103401   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:48.103408   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:48.103511   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:48.134700   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:48.134720   59960 cri.go:89] found id: ""
	I1126 20:12:48.134728   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:48.134795   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:48.138489   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:48.138568   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:48.164615   59960 cri.go:89] found id: ""
	I1126 20:12:48.164639   59960 logs.go:282] 0 containers: []
	W1126 20:12:48.164648   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:48.164657   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:48.164670   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:48.238206   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:48.238245   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:48.270325   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:48.270352   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:48.316632   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:48.316660   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:48.328526   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:48.328554   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:48.370051   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:48.370081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:48.397236   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:48.397264   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:48.478994   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:48.479029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:48.586134   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:48.586167   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:48.661172   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:48.650880   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.652436   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.653061   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.654717   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.655290   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:48.650880   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.652436   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.653061   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.654717   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:48.655290   10438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:48.661195   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:48.661211   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:48.689769   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:48.689797   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.235721   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:51.246961   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:51.247038   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:51.276386   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:51.276410   59960 cri.go:89] found id: ""
	I1126 20:12:51.276419   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:51.276472   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.280282   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:51.280363   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:51.307844   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:51.307875   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.307880   59960 cri.go:89] found id: ""
	I1126 20:12:51.307888   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:51.307944   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.311885   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.315516   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:51.315643   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:51.343040   59960 cri.go:89] found id: ""
	I1126 20:12:51.343068   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.343077   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:51.343084   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:51.343144   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:51.371879   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:51.371901   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:51.371907   59960 cri.go:89] found id: ""
	I1126 20:12:51.371920   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:51.371976   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.375815   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.379444   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:51.379518   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:51.409590   59960 cri.go:89] found id: ""
	I1126 20:12:51.409615   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.409624   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:51.409630   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:51.409688   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:51.440665   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:51.440692   59960 cri.go:89] found id: ""
	I1126 20:12:51.440701   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:51.440756   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:51.444486   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:51.444565   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:51.470661   59960 cri.go:89] found id: ""
	I1126 20:12:51.470686   59960 logs.go:282] 0 containers: []
	W1126 20:12:51.470695   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:51.470705   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:51.470749   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:51.482794   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:51.482823   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:51.570460   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:51.561457   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.562296   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.563970   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.564288   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.566409   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:51.561457   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.562296   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.563970   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.564288   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:51.566409   10526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:51.570484   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:51.570498   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:51.596696   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:51.596724   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:51.657780   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:51.657820   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:51.736300   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:51.736338   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:51.772635   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:51.772664   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:51.808014   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:51.808042   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:51.909775   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:51.909814   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:51.955849   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:51.955887   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:51.986011   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:51.986040   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:54.569991   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:54.582000   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:54.582074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:54.610486   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:54.610506   59960 cri.go:89] found id: ""
	I1126 20:12:54.610515   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:54.610573   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.614711   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:54.614787   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:54.641548   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:54.641571   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:54.641577   59960 cri.go:89] found id: ""
	I1126 20:12:54.641584   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:54.641645   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.645430   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.649375   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:54.649465   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:54.677350   59960 cri.go:89] found id: ""
	I1126 20:12:54.677377   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.677386   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:54.677399   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:54.677456   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:54.706226   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:54.706249   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:54.706254   59960 cri.go:89] found id: ""
	I1126 20:12:54.706261   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:54.706315   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.710188   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.713666   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:54.713759   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:54.745132   59960 cri.go:89] found id: ""
	I1126 20:12:54.745158   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.745167   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:54.745174   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:54.745235   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:54.774016   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:54.774039   59960 cri.go:89] found id: ""
	I1126 20:12:54.774047   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:54.774105   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:54.778220   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:54.778293   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:54.807768   59960 cri.go:89] found id: ""
	I1126 20:12:54.807831   59960 logs.go:282] 0 containers: []
	W1126 20:12:54.807845   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:54.807855   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:54.807867   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:54.904620   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:54.904657   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:54.931520   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:54.931548   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:54.974322   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:54.974360   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:55.010146   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:55.010176   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:55.044963   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:55.045006   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:55.060490   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:55.060520   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:55.132694   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:55.124286   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.124937   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.126610   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.127207   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.128929   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:55.124286   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.124937   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.126610   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.127207   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:55.128929   10699 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:12:55.132729   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:55.132746   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:55.180103   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:55.180139   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:55.258117   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:55.258154   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:55.289687   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:55.289716   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:57.870076   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:12:57.881883   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:12:57.881978   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:12:57.911809   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:57.911833   59960 cri.go:89] found id: ""
	I1126 20:12:57.911841   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:12:57.911899   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.915590   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:12:57.915685   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:12:57.943647   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:57.943671   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:57.943677   59960 cri.go:89] found id: ""
	I1126 20:12:57.943684   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:12:57.943747   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.947699   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:57.951409   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:12:57.951489   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:12:57.979114   59960 cri.go:89] found id: ""
	I1126 20:12:57.979138   59960 logs.go:282] 0 containers: []
	W1126 20:12:57.979147   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:12:57.979154   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:12:57.979214   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:12:58.009760   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:58.009781   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:58.009787   59960 cri.go:89] found id: ""
	I1126 20:12:58.009794   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:12:58.009855   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.013598   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.017135   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:12:58.017207   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:12:58.047222   59960 cri.go:89] found id: ""
	I1126 20:12:58.047247   59960 logs.go:282] 0 containers: []
	W1126 20:12:58.047255   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:12:58.047262   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:12:58.047324   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:12:58.094431   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:58.094510   59960 cri.go:89] found id: ""
	I1126 20:12:58.094524   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:12:58.094586   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:12:58.099004   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:12:58.099099   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:12:58.126698   59960 cri.go:89] found id: ""
	I1126 20:12:58.126727   59960 logs.go:282] 0 containers: []
	W1126 20:12:58.126735   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:12:58.126744   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:12:58.126756   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:12:58.155602   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:12:58.155629   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:12:58.196131   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:12:58.196166   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:12:58.243760   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:12:58.243793   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:12:58.314546   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:12:58.314583   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:12:58.347422   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:12:58.347451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:12:58.373247   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:12:58.373277   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:12:58.448488   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:12:58.448524   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:12:58.480586   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:12:58.480615   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:12:58.586743   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:12:58.586799   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:12:58.600003   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:12:58.600029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:12:58.682648   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:12:58.673481   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.674315   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.675021   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.676838   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.677737   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:12:58.673481   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.674315   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.675021   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.676838   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:12:58.677737   10861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:01.183502   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:01.195046   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:01.195153   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:01.224257   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:01.224281   59960 cri.go:89] found id: ""
	I1126 20:13:01.224289   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:01.224365   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.228134   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:01.228206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:01.265990   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:01.266014   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:01.266019   59960 cri.go:89] found id: ""
	I1126 20:13:01.266027   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:01.266084   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.270682   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.274505   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:01.274580   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:01.302962   59960 cri.go:89] found id: ""
	I1126 20:13:01.302989   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.302998   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:01.303005   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:01.303072   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:01.335599   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:01.335621   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:01.335627   59960 cri.go:89] found id: ""
	I1126 20:13:01.335635   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:01.335689   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.339621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.343531   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:01.343614   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:01.369553   59960 cri.go:89] found id: ""
	I1126 20:13:01.369578   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.369588   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:01.369594   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:01.369657   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:01.402170   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:01.402197   59960 cri.go:89] found id: ""
	I1126 20:13:01.402205   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:01.402266   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:01.406260   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:01.406336   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:01.432250   59960 cri.go:89] found id: ""
	I1126 20:13:01.432326   59960 logs.go:282] 0 containers: []
	W1126 20:13:01.432352   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:01.432362   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:01.432378   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:01.473457   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:01.473491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:01.525391   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:01.525445   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:01.557734   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:01.557765   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:01.650427   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:01.650465   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:01.696040   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:01.696070   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:01.801258   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:01.801297   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:01.872498   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:01.872534   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:01.912672   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:01.912725   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:01.927976   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:01.928008   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:02.002577   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:01.992139   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.993221   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.994589   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996153   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996915   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:01.992139   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.993221   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.994589   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996153   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:01.996915   10989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:02.002601   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:02.002614   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:04.532051   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:04.544501   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:04.544572   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:04.571414   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:04.571435   59960 cri.go:89] found id: ""
	I1126 20:13:04.571443   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:04.571494   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.575072   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:04.575149   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:04.603292   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:04.603312   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:04.603316   59960 cri.go:89] found id: ""
	I1126 20:13:04.603326   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:04.603378   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.607479   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.610889   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:04.610970   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:04.636626   59960 cri.go:89] found id: ""
	I1126 20:13:04.636652   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.636662   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:04.636668   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:04.636745   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:04.665487   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:04.665511   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:04.665516   59960 cri.go:89] found id: ""
	I1126 20:13:04.665523   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:04.665599   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.669516   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.673155   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:04.673221   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:04.705848   59960 cri.go:89] found id: ""
	I1126 20:13:04.705873   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.705882   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:04.705888   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:04.705971   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:04.741254   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:04.741277   59960 cri.go:89] found id: ""
	I1126 20:13:04.741285   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:04.741340   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:04.745396   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:04.745469   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:04.777680   59960 cri.go:89] found id: ""
	I1126 20:13:04.777713   59960 logs.go:282] 0 containers: []
	W1126 20:13:04.777723   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:04.777732   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:04.777744   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:04.884972   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:04.885008   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:04.898040   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:04.898066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:04.971530   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:04.971610   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:05.003493   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:05.003573   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:05.082481   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:05.082515   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:05.116089   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:05.116119   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:05.186979   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:05.178888   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.179664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181297   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.183205   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:05.178888   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.179664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181297   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.181664   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:05.183205   11103 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:05.187006   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:05.187020   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:05.214669   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:05.214698   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:05.261207   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:05.261238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:05.306449   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:05.306482   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:07.838042   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:07.850498   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:07.850567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:07.878108   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:07.878130   59960 cri.go:89] found id: ""
	I1126 20:13:07.878138   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:07.878197   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.882580   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:07.882654   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:07.911855   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:07.911886   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:07.911891   59960 cri.go:89] found id: ""
	I1126 20:13:07.911899   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:07.911960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.915705   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.919300   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:07.919371   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:07.951018   59960 cri.go:89] found id: ""
	I1126 20:13:07.951044   59960 logs.go:282] 0 containers: []
	W1126 20:13:07.951053   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:07.951059   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:07.951119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:07.978929   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:07.978951   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:07.978956   59960 cri.go:89] found id: ""
	I1126 20:13:07.978963   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:07.979017   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.983189   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:07.986830   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:07.986903   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:08.016199   59960 cri.go:89] found id: ""
	I1126 20:13:08.016231   59960 logs.go:282] 0 containers: []
	W1126 20:13:08.016240   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:08.016251   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:08.016325   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:08.053456   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:08.053528   59960 cri.go:89] found id: ""
	I1126 20:13:08.053549   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:08.053644   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:08.057986   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:08.058066   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:08.087479   59960 cri.go:89] found id: ""
	I1126 20:13:08.087508   59960 logs.go:282] 0 containers: []
	W1126 20:13:08.087517   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:08.087533   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:08.087546   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:08.132468   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:08.132502   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:08.176740   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:08.176778   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:08.250131   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:08.250178   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:08.280307   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:08.280337   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:08.310477   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:08.310506   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:08.413610   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:08.413648   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:08.484512   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:08.474848   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.476074   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.477530   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.478182   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.479748   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:08.474848   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.476074   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.477530   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.478182   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:08.479748   11250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:08.484538   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:08.484551   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:08.561138   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:08.561172   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:08.596362   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:08.596439   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:08.609838   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:08.609909   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.136633   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:11.147922   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:11.148007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:11.179880   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.179915   59960 cri.go:89] found id: ""
	I1126 20:13:11.179923   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:11.180040   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.184887   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:11.184958   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:11.213848   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:11.213872   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:11.213878   59960 cri.go:89] found id: ""
	I1126 20:13:11.213885   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:11.213981   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.217804   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.221572   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:11.221649   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:11.258706   59960 cri.go:89] found id: ""
	I1126 20:13:11.258783   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.258799   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:11.258806   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:11.258880   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:11.289663   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:11.289686   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:11.289692   59960 cri.go:89] found id: ""
	I1126 20:13:11.289699   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:11.289755   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.293522   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.298425   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:11.298504   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:11.325442   59960 cri.go:89] found id: ""
	I1126 20:13:11.325508   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.325534   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:11.325552   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:11.325636   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:11.352745   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:11.352808   59960 cri.go:89] found id: ""
	I1126 20:13:11.352834   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:11.352923   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:11.356710   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:11.356824   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:11.384378   59960 cri.go:89] found id: ""
	I1126 20:13:11.384402   59960 logs.go:282] 0 containers: []
	W1126 20:13:11.384412   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:11.384421   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:11.384433   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:11.396869   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:11.396938   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:11.467278   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:11.459180   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.459948   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.461472   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.462000   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.463589   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:11.459180   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.459948   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.461472   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.462000   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:11.463589   11348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:11.467302   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:11.467316   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:11.494598   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:11.494626   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:11.533337   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:11.533372   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:11.559364   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:11.559392   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:11.642834   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:11.642873   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:11.680367   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:11.680393   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:11.784039   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:11.784075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:11.834225   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:11.834260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:11.905094   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:11.905129   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.439226   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:14.451155   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:14.451245   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:14.493752   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:14.493776   59960 cri.go:89] found id: ""
	I1126 20:13:14.493784   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:14.493840   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.497504   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:14.497627   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:14.524624   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:14.524646   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:14.524652   59960 cri.go:89] found id: ""
	I1126 20:13:14.524659   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:14.524743   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.528418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.532417   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:14.532512   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:14.559402   59960 cri.go:89] found id: ""
	I1126 20:13:14.559477   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.559491   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:14.559498   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:14.559556   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:14.588825   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:14.588848   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.588853   59960 cri.go:89] found id: ""
	I1126 20:13:14.588860   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:14.588921   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.593022   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.596763   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:14.596831   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:14.624835   59960 cri.go:89] found id: ""
	I1126 20:13:14.624858   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.624867   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:14.624874   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:14.624929   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:14.650771   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:14.650846   59960 cri.go:89] found id: ""
	I1126 20:13:14.650872   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:14.650960   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:14.656095   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:14.656219   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:14.682420   59960 cri.go:89] found id: ""
	I1126 20:13:14.682493   59960 logs.go:282] 0 containers: []
	W1126 20:13:14.682517   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:14.682540   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:14.682581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:14.722936   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:14.722971   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:14.754105   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:14.754134   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:14.786128   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:14.786156   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:14.798341   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:14.798370   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:14.873270   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:14.865757   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.866349   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.867866   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.868348   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.869793   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:14.865757   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.866349   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.867866   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.868348   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:14.869793   11515 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:14.873292   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:14.873306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:14.920206   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:14.920240   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:14.996591   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:14.996624   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:15.024423   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:15.024451   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:15.105848   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:15.105881   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:15.205091   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:15.205170   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:17.734682   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:17.745326   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:17.745391   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:17.773503   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:17.773525   59960 cri.go:89] found id: ""
	I1126 20:13:17.773534   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:17.773621   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.777326   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:17.777400   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:17.805117   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:17.805139   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:17.805144   59960 cri.go:89] found id: ""
	I1126 20:13:17.805151   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:17.805206   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.809065   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.812530   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:17.812601   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:17.841430   59960 cri.go:89] found id: ""
	I1126 20:13:17.841456   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.841465   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:17.841472   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:17.841530   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:17.868985   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:17.869009   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:17.869014   59960 cri.go:89] found id: ""
	I1126 20:13:17.869024   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:17.869081   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.882183   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.885701   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:17.885794   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:17.918849   59960 cri.go:89] found id: ""
	I1126 20:13:17.918872   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.918880   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:17.918887   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:17.918947   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:17.949773   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:17.949849   59960 cri.go:89] found id: ""
	I1126 20:13:17.949872   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:17.949996   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:17.953636   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:17.953705   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:17.980243   59960 cri.go:89] found id: ""
	I1126 20:13:17.980266   59960 logs.go:282] 0 containers: []
	W1126 20:13:17.980275   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:17.980284   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:17.980295   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:18.011301   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:18.011331   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:18.038493   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:18.038526   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:18.080613   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:18.080641   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:18.160950   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:18.160988   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:18.262170   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:18.262215   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:18.275569   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:18.275593   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:18.351781   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:18.343534   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.344057   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.345769   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.346381   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.347931   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:18.343534   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.344057   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.345769   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.346381   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:18.347931   11661 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
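The repeated `N containers: [...]` lines above come from counting the IDs that `sudo crictl ps -a --quiet --name=<name>` prints, one 64-hex-character ID per line. A minimal sketch of that counting step (the helper name `count_ids` is hypothetical, not from minikube):

```shell
# Hypothetical sketch: count container IDs the way the "N containers:" log
# lines do. `crictl ps -a --quiet` emits one 64-hex-char id per line; blank
# output means zero containers matched the --name filter.
count_ids() {
  # grep -c exits 1 when nothing matches, so force success for the 0 case
  grep -c -E '^[0-9a-f]{64}$' || true
}

# Sample input mirroring the two etcd ids seen in the log above
ids='217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46
cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5'

n=$(printf '%s\n' "$ids" | count_ids)
echo "$n"
```

With the two etcd IDs from the log this prints `2`, matching the `2 containers: [...]` line; an empty input yields `0`, matching the `No container was found matching "coredns"` warnings.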
	I1126 20:13:18.351805   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:18.351817   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:18.389344   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:18.389375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:18.434916   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:18.434949   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:18.527668   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:18.527702   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.058771   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:21.073274   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:21.073339   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:21.121326   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:21.121345   59960 cri.go:89] found id: ""
	I1126 20:13:21.121356   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:21.121415   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.130434   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:21.130507   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:21.164100   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:21.164161   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:21.164191   59960 cri.go:89] found id: ""
	I1126 20:13:21.164212   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:21.164289   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.168566   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.173217   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:21.173328   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:21.201882   59960 cri.go:89] found id: ""
	I1126 20:13:21.202006   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.202036   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:21.202055   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:21.202157   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:21.230033   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:21.230099   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:21.230120   59960 cri.go:89] found id: ""
	I1126 20:13:21.230144   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:21.230222   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.234188   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.238625   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:21.238709   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:21.266450   59960 cri.go:89] found id: ""
	I1126 20:13:21.266476   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.266485   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:21.266492   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:21.266567   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:21.293192   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.293221   59960 cri.go:89] found id: ""
	I1126 20:13:21.293229   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:21.293320   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:21.297074   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:21.297146   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:21.325608   59960 cri.go:89] found id: ""
	I1126 20:13:21.325635   59960 logs.go:282] 0 containers: []
	W1126 20:13:21.325644   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:21.325653   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:21.325665   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:21.365168   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:21.365201   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:21.407809   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:21.407841   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:21.490502   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:21.490538   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:21.593562   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:21.593598   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:21.620251   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:21.620280   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:21.696224   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:21.696260   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:21.724295   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:21.724324   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:21.754121   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:21.754146   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:21.785320   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:21.785347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:21.797528   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:21.797556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:21.871066   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:21.862248   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.863127   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.864832   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.865449   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.867089   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:21.862248   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.863127   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.864832   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.865449   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:21.867089   11832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
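The cycle above (`pgrep -xnf kube-apiserver...`, container listing, log gathering, failed `kubectl describe nodes`) repeats every few seconds because the apiserver on localhost:8443 is still refusing connections. A hedged sketch of that retry pattern, assuming a generic probe command in place of minikube's internal health check (`wait_for` and its arguments are illustrative, not minikube's API):

```shell
# Hedged sketch of a bounded retry loop like the one driving the repeated
# probe/gather cycles in this log. Runs the given probe command up to
# $tries times, sleeping briefly between attempts.
wait_for() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    # the probe here would be e.g. `pgrep -xnf kube-apiserver.*minikube.*`
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 0.1
  done
  return 1
}

# Demo with a probe that succeeds immediately
wait_for 3 true && echo ok
```

In the log every probe fails, so each iteration falls through to the log-gathering branch and the `connection refused` stderr block recurs with a new timestamp.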
	I1126 20:13:24.371542   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:24.382011   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:24.382074   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:24.413323   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:24.413351   59960 cri.go:89] found id: ""
	I1126 20:13:24.413360   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:24.413418   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.417248   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:24.417327   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:24.443549   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:24.443571   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:24.443576   59960 cri.go:89] found id: ""
	I1126 20:13:24.443583   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:24.443638   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.447448   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.450865   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:24.450933   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:24.481019   59960 cri.go:89] found id: ""
	I1126 20:13:24.481043   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.481052   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:24.481059   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:24.481119   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:24.509327   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:24.509349   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:24.509354   59960 cri.go:89] found id: ""
	I1126 20:13:24.509361   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:24.509416   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.512867   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.516116   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:24.516181   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:24.546284   59960 cri.go:89] found id: ""
	I1126 20:13:24.546361   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.546390   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:24.546405   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:24.546464   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:24.571968   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:24.572032   59960 cri.go:89] found id: ""
	I1126 20:13:24.572047   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:24.572113   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:24.575760   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:24.575830   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:24.603299   59960 cri.go:89] found id: ""
	I1126 20:13:24.603325   59960 logs.go:282] 0 containers: []
	W1126 20:13:24.603334   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:24.603373   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:24.603390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:24.642562   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:24.642595   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:24.696607   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:24.696640   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:24.724494   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:24.724523   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:24.805443   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:24.805477   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:24.880673   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:24.872137   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.872936   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.874737   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.875329   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.876994   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:24.872137   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.872936   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.874737   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.875329   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:24.876994   11925 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:24.880694   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:24.880708   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:24.912019   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:24.912047   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:24.998475   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:24.998511   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:25.027058   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:25.027084   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:25.060548   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:25.060577   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:25.167756   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:25.167795   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:27.682279   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:27.693116   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:27.693189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:27.720687   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:27.720706   59960 cri.go:89] found id: ""
	I1126 20:13:27.720713   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:27.720765   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.724317   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:27.724388   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:27.751345   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:27.751369   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:27.751375   59960 cri.go:89] found id: ""
	I1126 20:13:27.751384   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:27.751445   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.755313   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.758668   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:27.758738   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:27.788496   59960 cri.go:89] found id: ""
	I1126 20:13:27.788567   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.788592   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:27.788611   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:27.788703   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:27.815714   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:27.815743   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:27.815749   59960 cri.go:89] found id: ""
	I1126 20:13:27.815757   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:27.815831   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.819360   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.822959   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:27.823038   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:27.853270   59960 cri.go:89] found id: ""
	I1126 20:13:27.853316   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.853326   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:27.853333   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:27.853403   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:27.880677   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:27.880701   59960 cri.go:89] found id: ""
	I1126 20:13:27.880710   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:27.880766   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:27.884425   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:27.884499   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:27.917060   59960 cri.go:89] found id: ""
	I1126 20:13:27.917126   59960 logs.go:282] 0 containers: []
	W1126 20:13:27.917150   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:27.917183   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:27.917213   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:27.929246   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:27.929321   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:28.005492   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:27.995998   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.996970   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.999116   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.000043   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.001867   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:27.995998   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.996970   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:27.999116   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.000043   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:28.001867   12038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:28.005554   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:28.005581   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:28.032388   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:28.032414   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:28.090244   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:28.090279   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:28.140049   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:28.140081   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:28.217015   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:28.217052   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:28.252634   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:28.252663   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:28.356298   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:28.356347   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:28.391198   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:28.391227   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:28.470669   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:28.470706   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:31.018712   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:31.029520   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:31.029594   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:31.067229   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:31.067249   59960 cri.go:89] found id: ""
	I1126 20:13:31.067257   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:31.067315   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.071728   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:31.071796   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:31.100937   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:31.101015   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:31.101024   59960 cri.go:89] found id: ""
	I1126 20:13:31.101032   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:31.101092   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.106006   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.109883   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:31.110020   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:31.140073   59960 cri.go:89] found id: ""
	I1126 20:13:31.140098   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.140107   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:31.140114   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:31.140177   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:31.170126   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:31.170150   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:31.170155   59960 cri.go:89] found id: ""
	I1126 20:13:31.170163   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:31.170220   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.175522   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.180015   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:31.180137   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:31.216744   59960 cri.go:89] found id: ""
	I1126 20:13:31.216771   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.216781   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:31.216787   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:31.216847   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:31.244620   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:31.244653   59960 cri.go:89] found id: ""
	I1126 20:13:31.244661   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:31.244727   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:31.248677   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:31.248770   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:31.275812   59960 cri.go:89] found id: ""
	I1126 20:13:31.275890   59960 logs.go:282] 0 containers: []
	W1126 20:13:31.275914   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:31.275936   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:31.275972   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:31.308954   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:31.308981   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:31.404058   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:31.404140   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:31.449144   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:31.449177   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:31.526538   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:31.526575   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:31.613358   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:31.613393   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:31.626272   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:31.626300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:31.701051   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:31.692350   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.693035   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.694572   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.695120   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.696599   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:31.692350   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.693035   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.694572   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.695120   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:31.696599   12209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:31.701076   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:31.701089   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:31.726047   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:31.726075   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:31.770205   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:31.770246   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:31.800872   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:31.800898   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.331337   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:34.343013   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:34.343079   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:34.369127   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:34.369186   59960 cri.go:89] found id: ""
	I1126 20:13:34.369220   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:34.369305   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.372919   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:34.372984   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:34.400785   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:34.400806   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:34.400811   59960 cri.go:89] found id: ""
	I1126 20:13:34.400818   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:34.400871   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.404967   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.408568   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:34.408648   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:34.434956   59960 cri.go:89] found id: ""
	I1126 20:13:34.434981   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.434990   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:34.434996   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:34.435051   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:34.472918   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:34.472943   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:34.472948   59960 cri.go:89] found id: ""
	I1126 20:13:34.472956   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:34.473009   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.476556   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.480021   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:34.480097   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:34.506491   59960 cri.go:89] found id: ""
	I1126 20:13:34.506513   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.506522   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:34.506528   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:34.506587   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:34.534595   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.534618   59960 cri.go:89] found id: ""
	I1126 20:13:34.534627   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:34.534681   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:34.542373   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:34.542487   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:34.569404   59960 cri.go:89] found id: ""
	I1126 20:13:34.569439   59960 logs.go:282] 0 containers: []
	W1126 20:13:34.569449   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:34.569473   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:34.569491   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:34.594901   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:34.594926   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:34.661252   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:34.661357   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:34.736470   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:34.736504   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:34.767635   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:34.767659   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:34.849541   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:34.849578   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:34.890089   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:34.890122   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:34.918362   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:34.918390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:34.955774   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:34.955800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:35.056965   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:35.057001   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:35.078639   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:35.078668   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:35.151655   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:35.143337   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.143918   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.145438   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.146046   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.147630   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:35.143337   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.143918   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.145438   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.146046   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:35.147630   12379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:37.653306   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:37.665236   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:37.665306   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:37.692381   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:37.692404   59960 cri.go:89] found id: ""
	I1126 20:13:37.692420   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:37.692475   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.696411   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:37.696485   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:37.733416   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:37.733447   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:37.733452   59960 cri.go:89] found id: ""
	I1126 20:13:37.733459   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:37.733512   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.737487   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.740759   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:37.740827   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:37.770540   59960 cri.go:89] found id: ""
	I1126 20:13:37.770563   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.770571   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:37.770578   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:37.770645   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:37.798542   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:37.798566   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:37.798572   59960 cri.go:89] found id: ""
	I1126 20:13:37.798579   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:37.798632   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.802507   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.806007   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:37.806128   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:37.831752   59960 cri.go:89] found id: ""
	I1126 20:13:37.831780   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.831789   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:37.831796   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:37.831911   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:37.859491   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:37.859516   59960 cri.go:89] found id: ""
	I1126 20:13:37.859526   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:37.859608   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:37.863305   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:37.863407   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:37.890262   59960 cri.go:89] found id: ""
	I1126 20:13:37.890324   59960 logs.go:282] 0 containers: []
	W1126 20:13:37.890347   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:37.890370   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:37.890389   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:37.915303   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:37.915334   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:38.015981   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:38.016018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:38.028479   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:38.028518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:38.117235   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:38.107607   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.108494   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.110529   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.111224   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.112955   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:38.107607   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.108494   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.110529   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.111224   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:38.112955   12465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:38.117268   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:38.117293   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:38.146073   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:38.146106   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:38.223055   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:38.223091   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:38.256738   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:38.256769   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:38.284204   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:38.284234   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:38.322205   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:38.322237   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:38.365768   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:38.365800   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:40.946037   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:40.957084   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:40.957219   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:40.988160   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:40.988223   59960 cri.go:89] found id: ""
	I1126 20:13:40.988247   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:40.988330   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:40.991862   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:40.991975   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:41.021645   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:41.021671   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:41.021676   59960 cri.go:89] found id: ""
	I1126 20:13:41.021683   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:41.021776   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.025458   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.028751   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:41.028818   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:41.055272   59960 cri.go:89] found id: ""
	I1126 20:13:41.055297   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.055306   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:41.055313   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:41.055373   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:41.083272   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:41.083293   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:41.083298   59960 cri.go:89] found id: ""
	I1126 20:13:41.083306   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:41.083361   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.089116   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.092770   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:41.092882   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:41.119939   59960 cri.go:89] found id: ""
	I1126 20:13:41.119969   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.119978   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:41.119985   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:41.120085   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:41.149635   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:41.149657   59960 cri.go:89] found id: ""
	I1126 20:13:41.149666   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:41.149719   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:41.153346   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:41.153420   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:41.180294   59960 cri.go:89] found id: ""
	I1126 20:13:41.180320   59960 logs.go:282] 0 containers: []
	W1126 20:13:41.180329   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:41.180338   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:41.180350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:41.207608   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:41.207638   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:41.250184   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:41.250217   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:41.280787   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:41.280815   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:41.350595   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:41.339246   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.340025   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.341777   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.342622   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.345147   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:41.339246   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.340025   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.341777   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.342622   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:41.345147   12613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:41.350618   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:41.350631   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:41.395571   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:41.395607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:41.471537   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:41.471576   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:41.503158   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:41.503187   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:41.581612   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:41.581647   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:41.616210   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:41.616238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:41.712278   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:41.712311   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:44.224835   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:44.235354   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:44.235427   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:44.262020   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:44.262040   59960 cri.go:89] found id: ""
	I1126 20:13:44.262047   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:44.262100   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.266500   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:44.266621   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:44.293469   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:44.293492   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:44.293498   59960 cri.go:89] found id: ""
	I1126 20:13:44.293515   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:44.293592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.297513   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.301293   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:44.301379   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:44.331229   59960 cri.go:89] found id: ""
	I1126 20:13:44.331252   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.331260   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:44.331266   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:44.331326   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:44.358510   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:44.358529   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:44.358534   59960 cri.go:89] found id: ""
	I1126 20:13:44.358540   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:44.358597   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.362369   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.365719   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:44.365788   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:44.401237   59960 cri.go:89] found id: ""
	I1126 20:13:44.401303   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.401326   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:44.401348   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:44.401437   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:44.428506   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:44.428524   59960 cri.go:89] found id: ""
	I1126 20:13:44.428537   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:44.428592   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:44.432302   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:44.432379   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:44.461193   59960 cri.go:89] found id: ""
	I1126 20:13:44.461216   59960 logs.go:282] 0 containers: []
	W1126 20:13:44.461225   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:44.461234   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:44.461245   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:44.472842   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:44.472911   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:44.552602   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:44.536833   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.537581   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.546763   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.547452   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.548655   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:44.536833   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.537581   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.546763   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.547452   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:44.548655   12725 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:44.552629   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:44.552642   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:44.579143   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:44.579171   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:44.608447   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:44.608472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:44.634421   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:44.634447   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:44.669334   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:44.669362   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:44.770710   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:44.770785   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:44.815986   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:44.816016   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:44.860293   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:44.860327   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:44.936110   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:44.936144   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:47.514839   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:47.528244   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:47.528398   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:47.557240   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:47.557263   59960 cri.go:89] found id: ""
	I1126 20:13:47.557271   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:47.557328   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.561044   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:47.561146   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:47.586866   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:47.586888   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:47.586894   59960 cri.go:89] found id: ""
	I1126 20:13:47.586901   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:47.586956   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.591194   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.594829   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:47.594905   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:47.621081   59960 cri.go:89] found id: ""
	I1126 20:13:47.621104   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.621113   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:47.621120   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:47.621182   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:47.649583   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:47.649605   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:47.649610   59960 cri.go:89] found id: ""
	I1126 20:13:47.649618   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:47.649673   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.655090   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.659029   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:47.659096   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:47.685101   59960 cri.go:89] found id: ""
	I1126 20:13:47.685125   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.685134   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:47.685141   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:47.685198   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:47.712581   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:47.712603   59960 cri.go:89] found id: ""
	I1126 20:13:47.712612   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:47.712673   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:47.716384   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:47.716461   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:47.746287   59960 cri.go:89] found id: ""
	I1126 20:13:47.746321   59960 logs.go:282] 0 containers: []
	W1126 20:13:47.746330   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:47.746357   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:47.746375   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:47.776577   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:47.776607   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:47.810845   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:47.810874   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:47.851317   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:47.851350   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:47.897021   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:47.897054   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:47.925761   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:47.925792   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:47.953836   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:47.953863   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:48.054533   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:48.054569   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:48.074474   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:48.074505   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:48.148938   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:48.137331   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.137950   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.139682   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.140242   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.143726   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:48.137331   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.137950   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.139682   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.140242   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:48.143726   12917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:48.148963   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:48.148977   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:48.231199   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:48.231234   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:50.823233   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:50.833805   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:50.833878   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:50.862309   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:50.862333   59960 cri.go:89] found id: ""
	I1126 20:13:50.862342   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:50.862396   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.865957   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:50.866034   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:50.892542   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:50.892565   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:50.892571   59960 cri.go:89] found id: ""
	I1126 20:13:50.892578   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:50.892632   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.896328   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.899831   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:50.899905   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:50.931031   59960 cri.go:89] found id: ""
	I1126 20:13:50.931098   59960 logs.go:282] 0 containers: []
	W1126 20:13:50.931112   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:50.931119   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:50.931176   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:50.958547   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:50.958580   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:50.958586   59960 cri.go:89] found id: ""
	I1126 20:13:50.958594   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:50.958649   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.962711   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:50.966380   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:50.966453   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:50.998188   59960 cri.go:89] found id: ""
	I1126 20:13:50.998483   59960 logs.go:282] 0 containers: []
	W1126 20:13:50.998498   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:50.998505   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:50.998592   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:51.031422   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:51.031447   59960 cri.go:89] found id: ""
	I1126 20:13:51.031462   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:51.031519   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:51.035715   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:51.035788   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:51.077429   59960 cri.go:89] found id: ""
	I1126 20:13:51.077452   59960 logs.go:282] 0 containers: []
	W1126 20:13:51.077460   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:51.077469   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:51.077481   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:51.105578   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:51.105609   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:51.188473   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:51.188518   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:51.220853   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:51.220886   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:51.304811   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:51.304848   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:51.337094   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:51.337162   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:51.434145   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:51.434183   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:51.474781   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:51.474815   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:51.523360   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:51.523390   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:51.556210   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:51.556238   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:51.568960   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:51.568989   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:51.646125   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:51.637986   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.638634   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640319   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640884   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.642607   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:51.637986   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.638634   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640319   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.640884   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:51.642607   13063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:54.147140   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:54.159570   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:54.159641   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:54.190129   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:54.190150   59960 cri.go:89] found id: ""
	I1126 20:13:54.190158   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:54.190221   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.193723   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:54.193795   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:54.221859   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:54.221881   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:54.221886   59960 cri.go:89] found id: ""
	I1126 20:13:54.221893   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:54.221986   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.225619   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.229615   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:54.229686   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:54.257427   59960 cri.go:89] found id: ""
	I1126 20:13:54.257454   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.257464   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:54.257470   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:54.257528   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:54.283499   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:54.283522   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:54.283528   59960 cri.go:89] found id: ""
	I1126 20:13:54.283535   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:54.283591   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.287279   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.291072   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:54.291164   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:54.320377   59960 cri.go:89] found id: ""
	I1126 20:13:54.320409   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.320418   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:54.320424   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:54.320490   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:54.346357   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:54.346388   59960 cri.go:89] found id: ""
	I1126 20:13:54.346397   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:54.346453   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:54.350217   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:54.350337   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:54.387000   59960 cri.go:89] found id: ""
	I1126 20:13:54.387033   59960 logs.go:282] 0 containers: []
	W1126 20:13:54.387042   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:54.387052   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:54.387064   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:54.398981   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:54.399006   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:54.424733   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:54.424761   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:54.464124   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:54.464199   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:54.516097   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:54.516149   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:54.597621   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:54.597656   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:54.626882   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:54.626916   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:54.706226   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:54.706262   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:54.777575   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:54.768229   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.769042   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.770705   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.771452   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.773075   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:54.768229   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.769042   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.770705   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.771452   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:54.773075   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:54.777599   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:54.777612   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:54.808526   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:54.808556   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:54.839385   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:54.839412   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:57.435357   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:13:57.446250   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:13:57.446321   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:13:57.476511   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:57.476531   59960 cri.go:89] found id: ""
	I1126 20:13:57.476539   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:13:57.476595   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.480521   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:13:57.480599   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:13:57.508216   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:13:57.508239   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:57.508244   59960 cri.go:89] found id: ""
	I1126 20:13:57.508251   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:13:57.508312   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.512264   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.515930   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:13:57.516007   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:13:57.546712   59960 cri.go:89] found id: ""
	I1126 20:13:57.546737   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.546746   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:13:57.546753   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:13:57.546811   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:13:57.575286   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:57.575308   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:57.575314   59960 cri.go:89] found id: ""
	I1126 20:13:57.575321   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:13:57.575403   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.579177   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.582844   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:13:57.582947   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:13:57.610240   59960 cri.go:89] found id: ""
	I1126 20:13:57.610268   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.610276   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:13:57.610282   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:13:57.610366   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:13:57.637690   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:57.637715   59960 cri.go:89] found id: ""
	I1126 20:13:57.637722   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:13:57.637804   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:13:57.641691   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:13:57.641816   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:13:57.673478   59960 cri.go:89] found id: ""
	I1126 20:13:57.673512   59960 logs.go:282] 0 containers: []
	W1126 20:13:57.673521   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:13:57.673546   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:13:57.673565   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:13:57.724644   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:13:57.724677   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:13:57.801587   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:13:57.801622   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:13:57.846990   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:13:57.847020   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:13:57.948301   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:13:57.948336   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:13:57.960477   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:13:57.960510   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:13:58.036195   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:13:58.028003   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.028530   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030166   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030875   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.032666   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:13:58.028003   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.028530   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030166   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.030875   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:13:58.032666   13301 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:13:58.036262   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:13:58.036289   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:13:58.071247   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:13:58.071284   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:13:58.102552   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:13:58.102582   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:13:58.131358   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:13:58.131450   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:13:58.207844   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:13:58.207883   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:00.754664   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:00.765702   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:00.765771   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:00.806554   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:00.806579   59960 cri.go:89] found id: ""
	I1126 20:14:00.806587   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:00.806641   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.810501   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:00.810586   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:00.838112   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:00.838139   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:00.838144   59960 cri.go:89] found id: ""
	I1126 20:14:00.838152   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:00.838207   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.842001   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.845613   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:00.845684   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:00.874701   59960 cri.go:89] found id: ""
	I1126 20:14:00.874726   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.874735   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:00.874742   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:00.874821   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:00.903003   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:00.903027   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:00.903032   59960 cri.go:89] found id: ""
	I1126 20:14:00.903039   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:00.903097   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.907398   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.911095   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:00.911169   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:00.937717   59960 cri.go:89] found id: ""
	I1126 20:14:00.937741   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.937750   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:00.937757   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:00.937815   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:00.964659   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:00.964683   59960 cri.go:89] found id: ""
	I1126 20:14:00.964692   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:00.964761   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:00.969052   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:00.969128   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:00.996896   59960 cri.go:89] found id: ""
	I1126 20:14:00.996921   59960 logs.go:282] 0 containers: []
	W1126 20:14:00.996930   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:00.996940   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:00.996968   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:01.052982   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:01.053013   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:01.164358   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:01.164396   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:01.245847   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:01.237260   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.238200   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.239244   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.240970   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.241435   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:01.237260   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.238200   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.239244   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.240970   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:01.241435   13418 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:01.245874   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:01.245888   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:01.278036   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:01.278066   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:01.321761   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:01.321798   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:01.349850   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:01.349877   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:01.362087   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:01.362115   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:01.406110   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:01.406143   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:01.488538   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:01.488580   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:01.524108   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:01.524314   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:04.107171   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:04.119134   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:04.119206   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:04.150892   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:04.150913   59960 cri.go:89] found id: ""
	I1126 20:14:04.150920   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:04.150993   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.154614   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:04.154713   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:04.181842   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:04.181866   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:04.181870   59960 cri.go:89] found id: ""
	I1126 20:14:04.181878   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:04.181958   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.185706   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.189884   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:04.190033   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:04.217117   59960 cri.go:89] found id: ""
	I1126 20:14:04.217143   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.217152   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:04.217159   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:04.217218   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:04.244873   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:04.244893   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:04.244897   59960 cri.go:89] found id: ""
	I1126 20:14:04.244904   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:04.244962   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.248633   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.252113   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:04.252223   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:04.281381   59960 cri.go:89] found id: ""
	I1126 20:14:04.281410   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.281420   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:04.281426   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:04.281484   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:04.309793   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:04.309817   59960 cri.go:89] found id: ""
	I1126 20:14:04.309825   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:04.309881   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:04.313555   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:04.313625   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:04.341073   59960 cri.go:89] found id: ""
	I1126 20:14:04.341100   59960 logs.go:282] 0 containers: []
	W1126 20:14:04.341109   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:04.341117   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:04.341129   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:04.436704   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:04.436741   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:04.511848   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:04.500099   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.500700   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506376   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506925   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.508357   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:04.500099   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.500700   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506376   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.506925   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:04.508357   13544 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:04.511872   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:04.511887   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:04.572587   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:04.572662   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:04.622150   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:04.622182   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:04.648129   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:04.648200   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:04.736436   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:04.736472   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:04.748750   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:04.748783   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:04.784731   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:04.784756   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:04.861032   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:04.861067   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:04.888273   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:04.888306   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:07.422077   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:07.432698   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:07.432776   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:07.463525   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:07.463545   59960 cri.go:89] found id: ""
	I1126 20:14:07.463553   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:07.463605   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.467175   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:07.467243   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:07.497801   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:07.497821   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:07.497826   59960 cri.go:89] found id: ""
	I1126 20:14:07.497833   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:07.497888   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.501759   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.505120   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:07.505198   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:07.539084   59960 cri.go:89] found id: ""
	I1126 20:14:07.539112   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.539121   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:07.539127   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:07.539189   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:07.567688   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:07.567713   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:07.567720   59960 cri.go:89] found id: ""
	I1126 20:14:07.567727   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:07.567788   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.571445   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.575895   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:07.575973   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:07.603679   59960 cri.go:89] found id: ""
	I1126 20:14:07.603704   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.603713   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:07.603720   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:07.603801   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:07.633845   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:07.633869   59960 cri.go:89] found id: ""
	I1126 20:14:07.633877   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:07.633982   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:07.638439   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:07.638510   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:07.669305   59960 cri.go:89] found id: ""
	I1126 20:14:07.669329   59960 logs.go:282] 0 containers: []
	W1126 20:14:07.669338   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:07.669348   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:07.669361   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:07.746001   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:07.746039   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:07.773829   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:07.773859   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:07.806673   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:07.806705   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:07.847992   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:07.848029   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:07.876479   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:07.876507   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:07.952982   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:07.953018   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:08.054195   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:08.054235   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:08.071790   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:08.071819   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:08.158168   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:08.148798   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.150262   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.151831   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.152401   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.154098   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:08.148798   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.150262   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.151831   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.152401   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:08.154098   13732 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:08.158237   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:08.158266   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:08.185227   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:08.185257   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:10.730401   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:10.741460   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:10.741529   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:10.774241   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:10.774263   59960 cri.go:89] found id: ""
	I1126 20:14:10.774270   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:10.774327   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.778033   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:10.778103   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:10.806991   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:10.807015   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:10.807021   59960 cri.go:89] found id: ""
	I1126 20:14:10.807028   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:10.807083   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.810846   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.814441   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:10.814513   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:10.843200   59960 cri.go:89] found id: ""
	I1126 20:14:10.843226   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.843236   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:10.843242   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:10.843301   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:10.871039   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:10.871062   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:10.871068   59960 cri.go:89] found id: ""
	I1126 20:14:10.871075   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:10.871129   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.874747   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.878577   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:10.878661   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:10.907317   59960 cri.go:89] found id: ""
	I1126 20:14:10.907343   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.907352   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:10.907359   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:10.907414   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:10.936274   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:10.936297   59960 cri.go:89] found id: ""
	I1126 20:14:10.936306   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:10.936385   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:10.939976   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:10.940048   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:10.969776   59960 cri.go:89] found id: ""
	I1126 20:14:10.969848   59960 logs.go:282] 0 containers: []
	W1126 20:14:10.969884   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:10.969911   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:10.969997   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:11.067923   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:11.067964   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:11.082749   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:11.082781   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:11.124244   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:11.124281   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:11.173196   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:11.173232   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:11.200233   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:11.200268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:11.284292   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:11.284327   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:11.317517   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:11.317545   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:11.395020   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:11.386165   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.387087   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388651   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388979   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.390832   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:11.386165   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.387087   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388651   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.388979   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:11.390832   13861 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:11.395043   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:11.395056   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:11.422025   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:11.422059   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:11.500554   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:11.500588   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.028990   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:14.043196   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:14:14.043275   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:14:14.078393   59960 cri.go:89] found id: "11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:14.078418   59960 cri.go:89] found id: ""
	I1126 20:14:14.078426   59960 logs.go:282] 1 containers: [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb]
	I1126 20:14:14.078485   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.082581   59960 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:14:14.082679   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:14:14.113586   59960 cri.go:89] found id: "217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:14.113611   59960 cri.go:89] found id: "cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:14.113616   59960 cri.go:89] found id: ""
	I1126 20:14:14.113623   59960 logs.go:282] 2 containers: [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5]
	I1126 20:14:14.113677   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.117367   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.120847   59960 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:14:14.120921   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:14:14.147191   59960 cri.go:89] found id: ""
	I1126 20:14:14.147214   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.147222   59960 logs.go:284] No container was found matching "coredns"
	I1126 20:14:14.147229   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:14:14.147287   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:14:14.173461   59960 cri.go:89] found id: "b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:14.173483   59960 cri.go:89] found id: "37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.173489   59960 cri.go:89] found id: ""
	I1126 20:14:14.173496   59960 logs.go:282] 2 containers: [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa]
	I1126 20:14:14.173560   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.177359   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.180846   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:14:14.180926   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:14:14.211699   59960 cri.go:89] found id: ""
	I1126 20:14:14.211731   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.211740   59960 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:14:14.211747   59960 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:14:14.211815   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:14:14.245320   59960 cri.go:89] found id: "8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:14.245343   59960 cri.go:89] found id: ""
	I1126 20:14:14.245352   59960 logs.go:282] 1 containers: [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529]
	I1126 20:14:14.245422   59960 ssh_runner.go:195] Run: which crictl
	I1126 20:14:14.249066   59960 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:14:14.249133   59960 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:14:14.277385   59960 cri.go:89] found id: ""
	I1126 20:14:14.277407   59960 logs.go:282] 0 containers: []
	W1126 20:14:14.277415   59960 logs.go:284] No container was found matching "kindnet"
	I1126 20:14:14.277424   59960 logs.go:123] Gathering logs for dmesg ...
	I1126 20:14:14.277436   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:14:14.289839   59960 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:14:14.289866   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:14:14.361142   59960 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1126 20:14:14.352896   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.353542   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355081   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355655   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.357173   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1126 20:14:14.352896   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.353542   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355081   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.355655   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1126 20:14:14.357173   13960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:14:14.361165   59960 logs.go:123] Gathering logs for etcd [217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46] ...
	I1126 20:14:14.361179   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 217f78028ea5b55a340a7295a694be6b7d2e91348860b3cf0757cbdb412d3a46"
	I1126 20:14:14.419666   59960 logs.go:123] Gathering logs for etcd [cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5] ...
	I1126 20:14:14.419762   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cff5f7e1c320f7a9af634d2d2805f659484d8e18bfb07991086e6e2c809a8bd5"
	I1126 20:14:14.468633   59960 logs.go:123] Gathering logs for kube-scheduler [b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1] ...
	I1126 20:14:14.468667   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b205532732084e6fcf6d1990ed8217297cd7f85a7827315dd5604d36164b03a1"
	I1126 20:14:14.557664   59960 logs.go:123] Gathering logs for kube-apiserver [11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb] ...
	I1126 20:14:14.557696   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 11026e7694e916598880b5c1784266b37a37f90c4f5d421a66f873249442f8bb"
	I1126 20:14:14.583538   59960 logs.go:123] Gathering logs for kube-scheduler [37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa] ...
	I1126 20:14:14.583567   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 37530bc45e0390fa882b999b2bbdffaef014fd68485194633fdf1722180fb2fa"
	I1126 20:14:14.612806   59960 logs.go:123] Gathering logs for kube-controller-manager [8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529] ...
	I1126 20:14:14.612834   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8d8f6896e7e1335f939a13393503138ccebf919b758ad0984c41e1cb0e9ae529"
	I1126 20:14:14.638272   59960 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:14:14.638300   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:14:14.721230   59960 logs.go:123] Gathering logs for container status ...
	I1126 20:14:14.721268   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:14:14.755109   59960 logs.go:123] Gathering logs for kubelet ...
	I1126 20:14:14.755142   59960 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:14:17.358125   59960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:14:17.371898   59960 out.go:203] 
	W1126 20:14:17.375212   59960 out.go:285] X Exiting due to K8S_APISERVER_MISSING: adding node: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1126 20:14:17.375248   59960 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1126 20:14:17.375258   59960 out.go:285] * Related issues:
	W1126 20:14:17.375279   59960 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1126 20:14:17.375299   59960 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1126 20:14:17.378409   59960 out.go:203] 
	
	
	==> CRI-O <==
	Nov 26 20:07:27 ha-278127 crio[667]: time="2025-11-26T20:07:27.974719211Z" level=info msg="Started container" PID=1450 containerID=0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee description=kube-system/kube-controller-manager-ha-278127/kube-controller-manager id=87dec93c-7b21-4bf6-943c-261f225c113f name=/runtime.v1.RuntimeService/StartContainer sandboxID=aaf24b4012ae22573565b29a9c87fa6c77cadf206a779d5e6c1de76d289f128f
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.929319714Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ec2c398f-23e5-463c-bbb1-09030f312307 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.930440903Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8fc66d00-8c37-4d25-84c6-7d7ac1c54ce3 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.932121756Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5c15308b-e98f-4109-8cbc-9192ac697f01 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.932226698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.940571173Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.940960238Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f34edad928de60e13d64480bf036aa1cf6b11ecfb7c751ef02ef81267e506bc/merged/etc/passwd: no such file or directory"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.941066542Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f34edad928de60e13d64480bf036aa1cf6b11ecfb7c751ef02ef81267e506bc/merged/etc/group: no such file or directory"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.941381721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.959928416Z" level=info msg="Created container 1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5: kube-system/storage-provisioner/storage-provisioner" id=5c15308b-e98f-4109-8cbc-9192ac697f01 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.960936581Z" level=info msg="Starting container: 1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5" id=51eb399f-be44-48a0-a1b4-1c62267c418c name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:07:28 ha-278127 crio[667]: time="2025-11-26T20:07:28.967526563Z" level=info msg="Started container" PID=1462 containerID=1de9ee4cdf6523ba82be553073f7f95b567b3080cf0b35a8910ac6dcf51abbd5 description=kube-system/storage-provisioner/storage-provisioner id=51eb399f-be44-48a0-a1b4-1c62267c418c name=/runtime.v1.RuntimeService/StartContainer sandboxID=21dd814126bdbbb8dab349806b778ddb306dc5100a35c1bd2fe40c8004bcd523
	Nov 26 20:07:44 ha-278127 conmon[1447]: conmon 0e221d151c3ca5256368 <ninfo>: container 1450 exited with status 1
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.240819859Z" level=info msg="Removing container: c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.256615675Z" level=info msg="Error loading conmon cgroup of container c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9: cgroup deleted" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:07:45 ha-278127 crio[667]: time="2025-11-26T20:07:45.261280075Z" level=info msg="Removed container c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=6f335103-7e48-492e-b33a-d6d488e111fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.929977452Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c9fc5566-53be-4e3a-ad5b-047dfe5df6f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.931894512Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c6b73409-e91d-4450-8804-870ca6e0b63d name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.933188155Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=b5b42e4a-b813-4466-87cd-d441eaaf849b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.933308096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.94134128Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.942037763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.965749324Z" level=info msg="Created container b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca: kube-system/kube-controller-manager-ha-278127/kube-controller-manager" id=b5b42e4a-b813-4466-87cd-d441eaaf849b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.966758303Z" level=info msg="Starting container: b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca" id=d8573d49-5a20-4657-b169-a7727449cf6d name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:08:12 ha-278127 crio[667]: time="2025-11-26T20:08:12.975098568Z" level=info msg="Started container" PID=1498 containerID=b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca description=kube-system/kube-controller-manager-ha-278127/kube-controller-manager id=d8573d49-5a20-4657-b169-a7727449cf6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=aaf24b4012ae22573565b29a9c87fa6c77cadf206a779d5e6c1de76d289f128f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	b3d2b3bea3b9f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   6                   aaf24b4012ae2       kube-controller-manager-ha-278127   kube-system
	1de9ee4cdf652       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   8 minutes ago       Running             storage-provisioner       5                   21dd814126bdb       storage-provisioner                 kube-system
	0e221d151c3ca       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   5                   aaf24b4012ae2       kube-controller-manager-ha-278127   kube-system
	1a9b5dae15334       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   8 minutes ago       Exited              storage-provisioner       4                   21dd814126bdb       storage-provisioner                 kube-system
	1622dad7c067a       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   9 minutes ago       Running             kube-vip                  3                   d4cb99de55854       kube-vip-ha-278127                  kube-system
	822876229de0f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   9 minutes ago       Running             coredns                   2                   dfdbe4360041c       coredns-66bc5c9577-ndh8k            kube-system
	aef907239d286       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   9 minutes ago       Running             busybox                   2                   78d3fb27335b4       busybox-7b57f96db7-vwpd8            default
	787754735cfed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   9 minutes ago       Running             coredns                   2                   89e2c226e09e6       coredns-66bc5c9577-bbpk7            kube-system
	d140d1950675e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 minutes ago       Running             kindnet-cni               2                   b9a376ab09c3c       kindnet-gp24m                       kube-system
	7b45294efb449       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   9 minutes ago       Running             kube-proxy                2                   55fa9dab05c0d       kube-proxy-5fndw                    kube-system
	f5647f1652cc1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   9 minutes ago       Running             kube-apiserver            3                   c932fd4498a66       kube-apiserver-ha-278127            kube-system
	040a854900180       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   9 minutes ago       Running             kube-scheduler            2                   773a6356cec93       kube-scheduler-ha-278127            kube-system
	106da3c0ad4fa       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   9 minutes ago       Exited              kube-vip                  2                   d4cb99de55854       kube-vip-ha-278127                  kube-system
	cdc1651fea8f1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   9 minutes ago       Running             etcd                      2                   11d5891e684b3       etcd-ha-278127                      kube-system
	
	
	==> coredns [787754735cfed2e99ff1e0336a870da9b5e17eaed8d9d79b97dbfa75dd83059c] <==
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45898 - 29384 "HINFO IN 3170256484025904488.3791759156995599050. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014293297s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [822876229de0f6cb25db3449774153712b72a0c129090a61a1aeadc760c6cad4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53615 - 2115 "HINFO IN 6991506871979899616.8642824612935885209. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017055518s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-278127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T19_58_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:58:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:15:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:15:34 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:15:34 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:15:34 +0000   Wed, 26 Nov 2025 19:58:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:15:34 +0000   Wed, 26 Nov 2025 19:59:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-278127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                370e19a1-8269-418f-82ce-e7791d2f9cc5
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-vwpd8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-bbpk7             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 coredns-66bc5c9577-ndh8k             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-ha-278127                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-gp24m                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-278127             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-278127    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-5fndw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-278127             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-278127                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 9m15s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)      kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           17m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-278127 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)      kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           11m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   Starting                 9m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m25s (x8 over 9m26s)  kubelet          Node ha-278127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m25s (x8 over 9m26s)  kubelet          Node ha-278127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m25s (x8 over 9m26s)  kubelet          Node ha-278127 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m38s                  node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	  Normal   RegisteredNode           56s                    node-controller  Node ha-278127 event: Registered Node ha-278127 in Controller
	
	
	Name:               ha-278127-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_26T19_58_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 19:58:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:05:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 26 Nov 2025 20:05:41 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-278127-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                77d88c20-b1f3-431d-ace6-24a69c640dde
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-72bpv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-278127-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-x82cz                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-ha-278127-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-278127-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-p4455                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-278127-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-278127-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   RegisteredNode           17m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           15m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node ha-278127-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             12m                node-controller  Node ha-278127-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node ha-278127-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   RegisteredNode           7m38s              node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	  Normal   NodeNotReady             6m48s              node-controller  Node ha-278127-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           56s                node-controller  Node ha-278127-m02 event: Registered Node ha-278127-m02 in Controller
	
	
	Name:               ha-278127-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_26T20_01_35_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:01:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:05:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 26 Nov 2025 20:05:38 +0000   Wed, 26 Nov 2025 20:09:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-278127-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                4949defc-dfd6-4bc6-9c78-3cb968da2b3e
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hqq6q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kindnet-qbd6w               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-proxy-d4p99            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m (x3 over 14m)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 14m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m (x3 over 14m)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x3 over 14m)  kubelet          Node ha-278127-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   NodeReady                13m                kubelet          Node ha-278127-m04 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-278127-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-278127-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   RegisteredNode           7m38s              node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	  Normal   NodeNotReady             6m48s              node-controller  Node ha-278127-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           56s                node-controller  Node ha-278127-m04 event: Registered Node ha-278127-m04 in Controller
	
	
	Name:               ha-278127-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-278127-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=ha-278127
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_26T20_15_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:15:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-278127-m05
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:15:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:15:54 +0000   Wed, 26 Nov 2025 20:15:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:15:54 +0000   Wed, 26 Nov 2025 20:15:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:15:54 +0000   Wed, 26 Nov 2025 20:15:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:15:54 +0000   Wed, 26 Nov 2025 20:15:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-278127-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                d959912d-c0c4-4be3-93de-9124534b5461
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-l9p24                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 etcd-ha-278127-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         50s
	  kube-system                 kindnet-lskzr                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-ha-278127-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-ha-278127-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-proxy-8jv6l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-ha-278127-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-vip-ha-278127-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        50s   kube-proxy       
	  Normal  RegisteredNode  53s   node-controller  Node ha-278127-m05 event: Registered Node ha-278127-m05 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node ha-278127-m05 event: Registered Node ha-278127-m05 in Controller
	
	
	==> dmesg <==
	[Nov26 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014220] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507172] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032749] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.773464] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.697672] kauditd_printk_skb: 36 callbacks suppressed
	[Nov26 19:37] overlayfs: idmapped layers are currently not supported
	[  +0.074077] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov26 19:39] hrtimer: interrupt took 16123050 ns
	[Nov26 19:43] overlayfs: idmapped layers are currently not supported
	[Nov26 19:44] overlayfs: idmapped layers are currently not supported
	[Nov26 19:58] overlayfs: idmapped layers are currently not supported
	[ +33.942210] overlayfs: idmapped layers are currently not supported
	[Nov26 19:59] overlayfs: idmapped layers are currently not supported
	[Nov26 20:01] overlayfs: idmapped layers are currently not supported
	[Nov26 20:02] overlayfs: idmapped layers are currently not supported
	[Nov26 20:04] overlayfs: idmapped layers are currently not supported
	[  +3.105496] overlayfs: idmapped layers are currently not supported
	[ +37.228314] overlayfs: idmapped layers are currently not supported
	[Nov26 20:05] overlayfs: idmapped layers are currently not supported
	[Nov26 20:06] overlayfs: idmapped layers are currently not supported
	[  +3.713866] overlayfs: idmapped layers are currently not supported
	[Nov26 20:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cdc1651fea8f10bd665928dcc7bb174b74385eb06e911da9629df17c0d9d29e8] <==
	{"level":"info","ts":"2025-11-26T20:14:53.462630Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:53.488797Z","caller":"etcdserver/snapshot_merge.go:64","msg":"sent database snapshot to writer","bytes":6762496,"size":"6.8 MB"}
	{"level":"info","ts":"2025-11-26T20:14:53.507286Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b4f1ca082be894dc","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-26T20:14:53.507328Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:53.635810Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":4086,"remote-peer-id":"b4f1ca082be894dc","bytes":6771645,"size":"6.8 MB"}
	{"level":"info","ts":"2025-11-26T20:14:53.777261Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(9033535516480176766 12593026477526642892 13038424532659508444)"}
	{"level":"info","ts":"2025-11-26T20:14:53.777477Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:53.777538Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"b4f1ca082be894dc"}
	{"level":"warn","ts":"2025-11-26T20:14:53.794834Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:14:53.796380Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc","error":"EOF"}
	{"level":"info","ts":"2025-11-26T20:14:54.007036Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:54.049653Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b4f1ca082be894dc","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-11-26T20:14:54.049698Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:54.049710Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:54.077606Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"warn","ts":"2025-11-26T20:14:54.203311Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"b4f1ca082be894dc","error":"failed to write b4f1ca082be894dc on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:35908: write: broken pipe)"}
	{"level":"warn","ts":"2025-11-26T20:14:54.203400Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:54.223621Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b4f1ca082be894dc","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-26T20:14:54.223678Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:14:54.223691Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b4f1ca082be894dc"}
	{"level":"info","ts":"2025-11-26T20:15:02.767987Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-26T20:15:07.580298Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-26T20:15:23.636733Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"b4f1ca082be894dc","bytes":6771645,"size":"6.8 MB","took":"30.201610177s"}
	{"level":"warn","ts":"2025-11-26T20:15:51.913439Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.215465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:500 size:370350"}
	{"level":"info","ts":"2025-11-26T20:15:51.913501Z","caller":"traceutil/trace.go:172","msg":"trace[150333726] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:500; response_revision:3739; }","duration":"164.293191ms","start":"2025-11-26T20:15:51.749195Z","end":"2025-11-26T20:15:51.913488Z","steps":["trace[150333726] 'range keys from bolt db'  (duration: 162.977764ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:15:58 up 58 min,  0 user,  load average: 1.71, 1.37, 1.31
	Linux ha-278127 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d140d1950675ee8ccd9c84ef7a5a7da1b1e44300cc3e3a958c71e1138816061f] <==
	I1126 20:15:22.226696       1 main.go:324] Node ha-278127-m05 has CIDR [10.244.2.0/24] 
	I1126 20:15:32.226250       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:15:32.226286       1 main.go:301] handling current node
	I1126 20:15:32.226302       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:15:32.226309       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:15:32.226460       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:15:32.226474       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:15:32.226527       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1126 20:15:32.226538       1 main.go:324] Node ha-278127-m05 has CIDR [10.244.2.0/24] 
	I1126 20:15:42.226514       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:15:42.227359       1 main.go:301] handling current node
	I1126 20:15:42.227406       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:15:42.227455       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:15:42.227674       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:15:42.227962       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:15:42.228102       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1126 20:15:42.228120       1 main.go:324] Node ha-278127-m05 has CIDR [10.244.2.0/24] 
	I1126 20:15:52.226325       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1126 20:15:52.226369       1 main.go:324] Node ha-278127-m02 has CIDR [10.244.1.0/24] 
	I1126 20:15:52.226548       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1126 20:15:52.226558       1 main.go:324] Node ha-278127-m04 has CIDR [10.244.3.0/24] 
	I1126 20:15:52.226639       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1126 20:15:52.226645       1 main.go:324] Node ha-278127-m05 has CIDR [10.244.2.0/24] 
	I1126 20:15:52.226719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1126 20:15:52.226727       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f5647f1652cc11a195a49a98906391e791c3136916a5e3c249907585088fad42] <==
	{"level":"warn","ts":"2025-11-26T20:08:15.185150Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40019681e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185302Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400264b2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185460Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001969860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185569Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40023790e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185752Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a24960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.185791Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002218000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.188111Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400089eb40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":1,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.188335Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002471680/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190353Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400264b2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190396Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f503c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190413Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x40029423c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190430Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001969860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190463Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002a3b860/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190481Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4002378000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190499Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400203d2c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":4,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190513Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4000f51a40/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190529Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001a24960/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	{"level":"warn","ts":"2025-11-26T20:08:15.190727Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x400089e000/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":2,"error":"rpc error: code = Unavailable desc = etcdserver: leader changed"}
	W1126 20:08:17.152713       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1126 20:08:17.154506       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:08:17.162706       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:08:19.148616       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:08:22.296241       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:09:09.201336       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:09:09.262823       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee] <==
	I1126 20:07:29.733675       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:07:30.451982       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1126 20:07:30.452014       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:07:30.453426       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1126 20:07:30.453688       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1126 20:07:30.453871       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1126 20:07:30.453945       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1126 20:07:44.473711       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-kubernetes-service-cidr-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [b3d2b3bea3b9f0d42f5ec9c992ad87cad16307afa6489e152b85bea61806ecca] <==
	E1126 20:08:59.054603       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054612       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054617       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	E1126 20:08:59.054623       1 gc_controller.go:151] "Failed to get node" err="node \"ha-278127-m03\" not found" logger="pod-garbage-collector-controller" node="ha-278127-m03"
	I1126 20:08:59.075009       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mttpp"
	I1126 20:08:59.108301       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mttpp"
	I1126 20:08:59.108397       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-278127-m03"
	I1126 20:08:59.137341       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-278127-m03"
	I1126 20:08:59.137379       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-cjs7r"
	I1126 20:08:59.170242       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-cjs7r"
	I1126 20:08:59.170364       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-278127-m03"
	I1126 20:08:59.200927       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-278127-m03"
	I1126 20:08:59.201053       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-278127-m03"
	I1126 20:08:59.231029       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-278127-m03"
	I1126 20:08:59.231129       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-278127-m03"
	I1126 20:08:59.266325       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-278127-m03"
	I1126 20:08:59.266427       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-278127-m03"
	I1126 20:08:59.307467       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-278127-m03"
	I1126 20:14:09.243470       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-hqq6q"
	I1126 20:14:19.320009       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-72bpv"
	I1126 20:15:03.175366       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-278127-m05\" does not exist"
	I1126 20:15:03.207382       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-278127-m05" podCIDRs=["10.244.2.0/24"]
	I1126 20:15:04.358981       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-278127-m05"
	I1126 20:15:04.359270       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="PartialDisruption"
	I1126 20:15:49.366706       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [7b45294efb44968b6b5d7d6994b3f6f118094d33ccfb9aa9a125e9d6110f41b3] <==
	I1126 20:07:27.549779       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	I1126 20:07:27.549805       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	I1126 20:07:27.549666       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding"
	E1126 20:07:31.630334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:31.630336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:31.630470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:31.630581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:34.702391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:34.702403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:34.702509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:34.702664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:41.518262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:41.518267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:41.518397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:41.518465       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:07:41.518496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:07:52.462253       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:07:52.462312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:07:52.462400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:07:55.534388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1126 20:07:55.534401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:08:05.710253       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host"
	E1126 20:08:08.782267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2530\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1126 20:08:11.854307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2531\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:08:14.930219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-278127&resourceVersion=2538\": dial tcp 192.168.49.254:8443: connect: no route to host" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [040a8549001808f2d3fce3d4cf9f8dff272706173960c5e8004af8b1ea042e80] <==
	I1126 20:06:34.800738       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:06:39.572983       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:06:39.573028       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:06:39.573039       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:06:39.573046       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:06:39.693522       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:06:39.693624       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:06:39.703802       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:06:39.704071       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:06:39.715887       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:06:39.704092       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:06:39.816440       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1126 20:15:48.283319       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-l9p24\": pod busybox-7b57f96db7-l9p24 is already assigned to node \"ha-278127-m05\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-l9p24" node="ha-278127-m05"
	E1126 20:15:48.288301       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 1cbde006-b1ea-451e-ba5b-380c98a2782c(default/busybox-7b57f96db7-l9p24) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="default/busybox-7b57f96db7-l9p24"
	E1126 20:15:48.288437       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-l9p24\": pod busybox-7b57f96db7-l9p24 is already assigned to node \"ha-278127-m05\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-l9p24"
	I1126 20:15:48.290719       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-l9p24" node="ha-278127-m05"
	
	
	==> kubelet <==
	Nov 26 20:07:21 ha-278127 kubelet[805]: E1126 20:07:21.263300     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:23 ha-278127 kubelet[805]: E1126 20:07:23.240740     805 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-278127.187ba7448d330dec  default   2559 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-278127,UID:ha-278127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-278127 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-278127,},FirstTimestamp:2025-11-26 20:06:31 +0000 UTC,LastTimestamp:2025-11-26 20:06:32.032348366 +0000 UTC m=+0.308576049,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-278127,}"
	Nov 26 20:07:27 ha-278127 kubelet[805]: I1126 20:07:27.929241     805 scope.go:117] "RemoveContainer" containerID="c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9"
	Nov 26 20:07:28 ha-278127 kubelet[805]: I1126 20:07:28.928664     805 scope.go:117] "RemoveContainer" containerID="1a9b5dae1533404a7bf684e278d137906a4f310cb5682e61046be41540e6f32b"
	Nov 26 20:07:31 ha-278127 kubelet[805]: E1126 20:07:31.162433     805 controller.go:195] "Failed to update lease" err="Put \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:31 ha-278127 kubelet[805]: E1126 20:07:31.265440     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes ha-278127)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.163428     805 controller.go:195] "Failed to update lease" err="Put \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: I1126 20:07:41.163974     805 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.266735     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:07:41 ha-278127 kubelet[805]: E1126 20:07:41.266930     805 kubelet_node_status.go:473] "Unable to update node status" err="update node status exceeds retry count"
	Nov 26 20:07:45 ha-278127 kubelet[805]: I1126 20:07:45.237637     805 scope.go:117] "RemoveContainer" containerID="c5680f84cd871450e3f95050160c6bc383cefc96eca8fe13ef831453bb2fe8a9"
	Nov 26 20:07:45 ha-278127 kubelet[805]: I1126 20:07:45.238084     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:07:45 ha-278127 kubelet[805]: E1126 20:07:45.238254     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:07:47 ha-278127 kubelet[805]: I1126 20:07:47.402612     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:07:47 ha-278127 kubelet[805]: E1126 20:07:47.402814     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:07:49 ha-278127 kubelet[805]: E1126 20:07:49.241093     805 status_manager.go:1018] "Failed to get status for pod" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kindnet-gp24m)" podUID="4d3597e4-de22-4f29-8c58-1aaabd4a8a56" pod="kube-system/kindnet-gp24m"
	Nov 26 20:07:51 ha-278127 kubelet[805]: E1126 20:07:51.165080     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
	Nov 26 20:07:57 ha-278127 kubelet[805]: E1126 20:07:57.243812     805 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ha-278127.187ba7448d32cbe5  default   2561 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-278127,UID:ha-278127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-278127 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-278127,},FirstTimestamp:2025-11-26 20:06:31 +0000 UTC,LastTimestamp:2025-11-26 20:06:32.033252015 +0000 UTC m=+0.309479698,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-278127,}"
	Nov 26 20:08:00 ha-278127 kubelet[805]: I1126 20:08:00.928844     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	Nov 26 20:08:00 ha-278127 kubelet[805]: E1126 20:08:00.929077     805 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-278127_kube-system(5eb8d26456c3b783869be39bb80c3519)\"" pod="kube-system/kube-controller-manager-ha-278127" podUID="5eb8d26456c3b783869be39bb80c3519"
	Nov 26 20:08:01 ha-278127 kubelet[805]: E1126 20:08:01.366584     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
	Nov 26 20:08:01 ha-278127 kubelet[805]: E1126 20:08:01.649883     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T20:07:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recurs
iveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"ha-278127\": Patch \"https://192.168.49.2:8443/api/v1/nodes/ha-278127/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:08:11 ha-278127 kubelet[805]: E1126 20:08:11.650209     805 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ha-278127\": Get \"https://192.168.49.2:8443/api/v1/nodes/ha-278127?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Nov 26 20:08:11 ha-278127 kubelet[805]: E1126 20:08:11.768381     805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-278127?timeout=10s\": context deadline exceeded" interval="800ms"
	Nov 26 20:08:12 ha-278127 kubelet[805]: I1126 20:08:12.929036     805 scope.go:117] "RemoveContainer" containerID="0e221d151c3ca52563688e2194b1c01d8b4614a29869607958f68b96125603ee"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-278127 -n ha-278127
helpers_test.go:269: (dbg) Run:  kubectl --context ha-278127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-rcsd2
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-278127 describe pod busybox-7b57f96db7-rcsd2
helpers_test.go:290: (dbg) kubectl --context ha-278127 describe pod busybox-7b57f96db7-rcsd2:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-rcsd2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zn4mp (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-zn4mp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  101s               default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  58s (x2 over 58s)  default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  57s                default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  56s                default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  12s                default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  1s                 default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  58s (x4 over 62s)  default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  57s                default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  56s (x2 over 57s)  default-scheduler  0/4 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 1 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  1s (x2 over 12s)   default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (6.13s)

                                                
                                    
TestJSONOutput/pause/Command (2.43s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-053036 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-053036 --output=json --user=testUser: exit status 80 (2.42857061s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"47283a41-2973-46b3-9380-0449b8a9465c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-053036 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"b4935cb8-71f0-4662-be07-767bc1bea1e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-26T20:17:40Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"b8eaaaf0-605b-4422-8a72-ac763c64e49d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-053036 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.43s)
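
The `--output=json` lines in the stdout above are CloudEvents envelopes; the failure is carried in an event of type `io.k8s.sigs.minikube.error`. A minimal sketch of pulling the error name and exit code out of such output (event payloads abridged from the log above, not the full emitted JSON):

```python
import json

# Two CloudEvents as emitted by `minikube pause --output=json`,
# abridged from the test log above.
events = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.step",'
    '"data":{"currentstep":"0","name":"Pausing","totalsteps":"1"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"80","name":"GUEST_PAUSE",'
    '"message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1"}}',
]

# Keep only error events and read the minikube error name / exit code.
errors = [json.loads(e)["data"] for e in events
          if json.loads(e)["type"] == "io.k8s.sigs.minikube.error"]
name, code = errors[0]["name"], int(errors[0]["exitcode"])
print(name, code)  # GUEST_PAUSE 80
```

The same filter applied to the unpause failure below yields `GUEST_UNPAUSE` with the same exit code 80; both trace back to `sudo runc list -f json` failing with `open /run/runc: no such file or directory` inside the node.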

                                                
                                    
TestJSONOutput/unpause/Command (1.96s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-053036 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-053036 --output=json --user=testUser: exit status 80 (1.955747945s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e125ed92-64a4-4592-8e41-6d7e96060394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-053036 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"cb77d15f-c826-41b1-8f99-f276a83fca15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-26T20:17:42Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"8cd41190-187e-4048-b536-8c83ca4b3dfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-053036 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.96s)

                                                
                                    
TestPause/serial/Pause (7.55s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-166757 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-166757 --alsologtostderr -v=5: exit status 80 (2.506720065s)

                                                
                                                
-- stdout --
	* Pausing node pause-166757 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 20:43:19.551378  187476 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:43:19.552301  187476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:43:19.552338  187476 out.go:374] Setting ErrFile to fd 2...
	I1126 20:43:19.552361  187476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:43:19.552656  187476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:43:19.552967  187476 out.go:368] Setting JSON to false
	I1126 20:43:19.553015  187476 mustload.go:66] Loading cluster: pause-166757
	I1126 20:43:19.553500  187476 config.go:182] Loaded profile config "pause-166757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:43:19.554044  187476 cli_runner.go:164] Run: docker container inspect pause-166757 --format={{.State.Status}}
	I1126 20:43:19.571766  187476 host.go:66] Checking if "pause-166757" exists ...
	I1126 20:43:19.572073  187476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:43:19.671535  187476 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:43:19.660425903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:43:19.672262  187476 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-166757 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:43:19.675521  187476 out.go:179] * Pausing node pause-166757 ... 
	I1126 20:43:19.679172  187476 host.go:66] Checking if "pause-166757" exists ...
	I1126 20:43:19.679650  187476 ssh_runner.go:195] Run: systemctl --version
	I1126 20:43:19.679741  187476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:43:19.699243  187476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/pause-166757/id_rsa Username:docker}
	I1126 20:43:19.805425  187476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:43:19.819437  187476 pause.go:52] kubelet running: true
	I1126 20:43:19.819506  187476 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:43:20.077600  187476 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:43:20.077678  187476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:43:20.173496  187476 cri.go:89] found id: "ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d"
	I1126 20:43:20.173518  187476 cri.go:89] found id: "ff0a5f1227925b4bdb72055f1ac096149718cb675cab7d6d694aa06631f5ccea"
	I1126 20:43:20.173523  187476 cri.go:89] found id: "bf90263bd4f1cf3ae79640f3420e3512ddac538a4089f3d2dd281242570b18dc"
	I1126 20:43:20.173527  187476 cri.go:89] found id: "eac939c08bc98665f4bf51748fc29d22412f9ee4271d7560afcbe9d5813486ae"
	I1126 20:43:20.173531  187476 cri.go:89] found id: "8280393973d719432323cdf237acb2bda01b8dce41b8dffb5bd87ebc5d1dd828"
	I1126 20:43:20.173535  187476 cri.go:89] found id: "4f7996a732bd73b5f908a785886db88ef6214a2067d6c11b1d4e1292f31b6556"
	I1126 20:43:20.173538  187476 cri.go:89] found id: "091ca865eebb280db3b387e326ef44d9b1d136413786c299225e04fa0f4673c1"
	I1126 20:43:20.173541  187476 cri.go:89] found id: "2db020b8c32b522251976eced59d8bb3bac5adab09d141a0bf566661e506974c"
	I1126 20:43:20.173545  187476 cri.go:89] found id: "0db000c6d2320c82ec9be70d6c38cf52db881b458ac9fcbb65a9de481d9005fd"
	I1126 20:43:20.173551  187476 cri.go:89] found id: "60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	I1126 20:43:20.173554  187476 cri.go:89] found id: "d3ad91d7746bb4b386071782c6f36969bb925be7fbcfcd4d33a447d23efb7975"
	I1126 20:43:20.173557  187476 cri.go:89] found id: "4dee54f7f5168459562bdac0a84ab912b1e6d20efea644ea468f645384533723"
	I1126 20:43:20.173561  187476 cri.go:89] found id: "a84e4d20f1907030703fc54a2a88bc2779dec332e6e8415d049b55a34abd0119"
	I1126 20:43:20.173564  187476 cri.go:89] found id: "6dffcf8b996742928728e2c585061644cc362bcb92cdff0791c4434cf0f2073a"
	I1126 20:43:20.173567  187476 cri.go:89] found id: ""
	I1126 20:43:20.173615  187476 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:43:20.185166  187476 retry.go:31] will retry after 308.012446ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:43:20Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:43:20.493797  187476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:43:20.506556  187476 pause.go:52] kubelet running: false
	I1126 20:43:20.506618  187476 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:43:20.665105  187476 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:43:20.665181  187476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:43:20.728734  187476 cri.go:89] found id: "ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d"
	I1126 20:43:20.728761  187476 cri.go:89] found id: "ff0a5f1227925b4bdb72055f1ac096149718cb675cab7d6d694aa06631f5ccea"
	I1126 20:43:20.728767  187476 cri.go:89] found id: "bf90263bd4f1cf3ae79640f3420e3512ddac538a4089f3d2dd281242570b18dc"
	I1126 20:43:20.728771  187476 cri.go:89] found id: "eac939c08bc98665f4bf51748fc29d22412f9ee4271d7560afcbe9d5813486ae"
	I1126 20:43:20.728774  187476 cri.go:89] found id: "8280393973d719432323cdf237acb2bda01b8dce41b8dffb5bd87ebc5d1dd828"
	I1126 20:43:20.728778  187476 cri.go:89] found id: "4f7996a732bd73b5f908a785886db88ef6214a2067d6c11b1d4e1292f31b6556"
	I1126 20:43:20.728806  187476 cri.go:89] found id: "091ca865eebb280db3b387e326ef44d9b1d136413786c299225e04fa0f4673c1"
	I1126 20:43:20.728815  187476 cri.go:89] found id: "2db020b8c32b522251976eced59d8bb3bac5adab09d141a0bf566661e506974c"
	I1126 20:43:20.728819  187476 cri.go:89] found id: "0db000c6d2320c82ec9be70d6c38cf52db881b458ac9fcbb65a9de481d9005fd"
	I1126 20:43:20.728826  187476 cri.go:89] found id: "60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	I1126 20:43:20.728835  187476 cri.go:89] found id: "d3ad91d7746bb4b386071782c6f36969bb925be7fbcfcd4d33a447d23efb7975"
	I1126 20:43:20.728838  187476 cri.go:89] found id: "4dee54f7f5168459562bdac0a84ab912b1e6d20efea644ea468f645384533723"
	I1126 20:43:20.728841  187476 cri.go:89] found id: "a84e4d20f1907030703fc54a2a88bc2779dec332e6e8415d049b55a34abd0119"
	I1126 20:43:20.728844  187476 cri.go:89] found id: "6dffcf8b996742928728e2c585061644cc362bcb92cdff0791c4434cf0f2073a"
	I1126 20:43:20.728847  187476 cri.go:89] found id: ""
	I1126 20:43:20.728894  187476 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:43:20.739900  187476 retry.go:31] will retry after 287.77156ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:43:20Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:43:21.028506  187476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:43:21.041579  187476 pause.go:52] kubelet running: false
	I1126 20:43:21.041650  187476 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:43:21.195807  187476 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:43:21.195901  187476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:43:21.264125  187476 cri.go:89] found id: "ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d"
	I1126 20:43:21.264151  187476 cri.go:89] found id: "ff0a5f1227925b4bdb72055f1ac096149718cb675cab7d6d694aa06631f5ccea"
	I1126 20:43:21.264157  187476 cri.go:89] found id: "bf90263bd4f1cf3ae79640f3420e3512ddac538a4089f3d2dd281242570b18dc"
	I1126 20:43:21.264161  187476 cri.go:89] found id: "eac939c08bc98665f4bf51748fc29d22412f9ee4271d7560afcbe9d5813486ae"
	I1126 20:43:21.264164  187476 cri.go:89] found id: "8280393973d719432323cdf237acb2bda01b8dce41b8dffb5bd87ebc5d1dd828"
	I1126 20:43:21.264168  187476 cri.go:89] found id: "4f7996a732bd73b5f908a785886db88ef6214a2067d6c11b1d4e1292f31b6556"
	I1126 20:43:21.264171  187476 cri.go:89] found id: "091ca865eebb280db3b387e326ef44d9b1d136413786c299225e04fa0f4673c1"
	I1126 20:43:21.264175  187476 cri.go:89] found id: "2db020b8c32b522251976eced59d8bb3bac5adab09d141a0bf566661e506974c"
	I1126 20:43:21.264183  187476 cri.go:89] found id: "0db000c6d2320c82ec9be70d6c38cf52db881b458ac9fcbb65a9de481d9005fd"
	I1126 20:43:21.264189  187476 cri.go:89] found id: "60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	I1126 20:43:21.264193  187476 cri.go:89] found id: "d3ad91d7746bb4b386071782c6f36969bb925be7fbcfcd4d33a447d23efb7975"
	I1126 20:43:21.264197  187476 cri.go:89] found id: "4dee54f7f5168459562bdac0a84ab912b1e6d20efea644ea468f645384533723"
	I1126 20:43:21.264200  187476 cri.go:89] found id: "a84e4d20f1907030703fc54a2a88bc2779dec332e6e8415d049b55a34abd0119"
	I1126 20:43:21.264210  187476 cri.go:89] found id: "6dffcf8b996742928728e2c585061644cc362bcb92cdff0791c4434cf0f2073a"
	I1126 20:43:21.264214  187476 cri.go:89] found id: ""
	I1126 20:43:21.264279  187476 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:43:21.275781  187476 retry.go:31] will retry after 470.053118ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:43:21Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:43:21.746421  187476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:43:21.760170  187476 pause.go:52] kubelet running: false
	I1126 20:43:21.760230  187476 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:43:21.900219  187476 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:43:21.900310  187476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:43:21.964370  187476 cri.go:89] found id: "ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d"
	I1126 20:43:21.964394  187476 cri.go:89] found id: "ff0a5f1227925b4bdb72055f1ac096149718cb675cab7d6d694aa06631f5ccea"
	I1126 20:43:21.964398  187476 cri.go:89] found id: "bf90263bd4f1cf3ae79640f3420e3512ddac538a4089f3d2dd281242570b18dc"
	I1126 20:43:21.964402  187476 cri.go:89] found id: "eac939c08bc98665f4bf51748fc29d22412f9ee4271d7560afcbe9d5813486ae"
	I1126 20:43:21.964408  187476 cri.go:89] found id: "8280393973d719432323cdf237acb2bda01b8dce41b8dffb5bd87ebc5d1dd828"
	I1126 20:43:21.964412  187476 cri.go:89] found id: "4f7996a732bd73b5f908a785886db88ef6214a2067d6c11b1d4e1292f31b6556"
	I1126 20:43:21.964415  187476 cri.go:89] found id: "091ca865eebb280db3b387e326ef44d9b1d136413786c299225e04fa0f4673c1"
	I1126 20:43:21.964418  187476 cri.go:89] found id: "2db020b8c32b522251976eced59d8bb3bac5adab09d141a0bf566661e506974c"
	I1126 20:43:21.964421  187476 cri.go:89] found id: "0db000c6d2320c82ec9be70d6c38cf52db881b458ac9fcbb65a9de481d9005fd"
	I1126 20:43:21.964430  187476 cri.go:89] found id: "60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	I1126 20:43:21.964433  187476 cri.go:89] found id: "d3ad91d7746bb4b386071782c6f36969bb925be7fbcfcd4d33a447d23efb7975"
	I1126 20:43:21.964436  187476 cri.go:89] found id: "4dee54f7f5168459562bdac0a84ab912b1e6d20efea644ea468f645384533723"
	I1126 20:43:21.964439  187476 cri.go:89] found id: "a84e4d20f1907030703fc54a2a88bc2779dec332e6e8415d049b55a34abd0119"
	I1126 20:43:21.964442  187476 cri.go:89] found id: "6dffcf8b996742928728e2c585061644cc362bcb92cdff0791c4434cf0f2073a"
	I1126 20:43:21.964445  187476 cri.go:89] found id: ""
	I1126 20:43:21.964493  187476 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:43:21.978780  187476 out.go:203] 
	W1126 20:43:21.981766  187476 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:43:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:43:21.981791  187476 out.go:285] * 
	W1126 20:43:21.987561  187476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:43:21.990382  187476 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-166757 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-166757
helpers_test.go:243: (dbg) docker inspect pause-166757:

-- stdout --
	[
	    {
	        "Id": "4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915",
	        "Created": "2025-11-26T20:41:18.568444955Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 180793,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:41:18.657791911Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915/hostname",
	        "HostsPath": "/var/lib/docker/containers/4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915/hosts",
	        "LogPath": "/var/lib/docker/containers/4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915/4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915-json.log",
	        "Name": "/pause-166757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-166757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-166757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915",
	                "LowerDir": "/var/lib/docker/overlay2/f110d1456460e947f0b7dfca99a7906cd4b868a8fa0c5c915d04992a95b693ef-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f110d1456460e947f0b7dfca99a7906cd4b868a8fa0c5c915d04992a95b693ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f110d1456460e947f0b7dfca99a7906cd4b868a8fa0c5c915d04992a95b693ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f110d1456460e947f0b7dfca99a7906cd4b868a8fa0c5c915d04992a95b693ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-166757",
	                "Source": "/var/lib/docker/volumes/pause-166757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-166757",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-166757",
	                "name.minikube.sigs.k8s.io": "pause-166757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79a9b77d69dfcedc057c7c235cc0b8d197aa55e9ae352c7c76dd0ef3e3a863fd",
	            "SandboxKey": "/var/run/docker/netns/79a9b77d69df",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-166757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:0e:1c:51:3c:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f17286df33a471f9116019cb9202d2a12695a60509df55724323f546dd77948",
	                    "EndpointID": "6a9d59863d3c25a2b2a5cf2c55a80d64d939666026578e279141e11581cee7f1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-166757",
	                        "4222ca302309"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-166757 -n pause-166757
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-166757 -n pause-166757: exit status 2 (353.898637ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-166757 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-166757 logs -n 25: (1.611772387s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-784576 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:34 UTC │ 26 Nov 25 20:34 UTC │
	│ delete  │ -p NoKubernetes-784576                                                                                                                   │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:34 UTC │ 26 Nov 25 20:34 UTC │
	│ start   │ -p NoKubernetes-784576 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:34 UTC │ 26 Nov 25 20:35 UTC │
	│ start   │ -p missing-upgrade-701119 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-701119    │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:35 UTC │
	│ ssh     │ -p NoKubernetes-784576 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │                     │
	│ stop    │ -p NoKubernetes-784576                                                                                                                   │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:35 UTC │
	│ start   │ -p NoKubernetes-784576 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:35 UTC │
	│ ssh     │ -p NoKubernetes-784576 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │                     │
	│ delete  │ -p NoKubernetes-784576                                                                                                                   │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:35 UTC │
	│ start   │ -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:36 UTC │
	│ delete  │ -p missing-upgrade-701119                                                                                                                │ missing-upgrade-701119    │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:35 UTC │
	│ start   │ -p stopped-upgrade-569097 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-569097    │ jenkins │ v1.35.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:36 UTC │
	│ stop    │ -p kubernetes-upgrade-007998                                                                                                             │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:36 UTC │ 26 Nov 25 20:36 UTC │
	│ start   │ -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:36 UTC │ 26 Nov 25 20:38 UTC │
	│ stop    │ stopped-upgrade-569097 stop                                                                                                              │ stopped-upgrade-569097    │ jenkins │ v1.35.0 │ 26 Nov 25 20:36 UTC │ 26 Nov 25 20:36 UTC │
	│ start   │ -p stopped-upgrade-569097 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-569097    │ jenkins │ v1.37.0 │ 26 Nov 25 20:36 UTC │ 26 Nov 25 20:41 UTC │
	│ start   │ -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:38 UTC │                     │
	│ start   │ -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:38 UTC │ 26 Nov 25 20:38 UTC │
	│ delete  │ -p kubernetes-upgrade-007998                                                                                                             │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:38 UTC │ 26 Nov 25 20:38 UTC │
	│ start   │ -p running-upgrade-215687 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-215687    │ jenkins │ v1.35.0 │ 26 Nov 25 20:38 UTC │ 26 Nov 25 20:39 UTC │
	│ start   │ -p running-upgrade-215687 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-215687    │ jenkins │ v1.37.0 │ 26 Nov 25 20:39 UTC │                     │
	│ delete  │ -p stopped-upgrade-569097                                                                                                                │ stopped-upgrade-569097    │ jenkins │ v1.37.0 │ 26 Nov 25 20:41 UTC │ 26 Nov 25 20:41 UTC │
	│ start   │ -p pause-166757 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-166757              │ jenkins │ v1.37.0 │ 26 Nov 25 20:41 UTC │ 26 Nov 25 20:42 UTC │
	│ start   │ -p pause-166757 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-166757              │ jenkins │ v1.37.0 │ 26 Nov 25 20:42 UTC │ 26 Nov 25 20:43 UTC │
	│ pause   │ -p pause-166757 --alsologtostderr -v=5                                                                                                   │ pause-166757              │ jenkins │ v1.37.0 │ 26 Nov 25 20:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:42:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:42:38.014542  184902 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:42:38.014702  184902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:42:38.014712  184902 out.go:374] Setting ErrFile to fd 2...
	I1126 20:42:38.014718  184902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:42:38.015014  184902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:42:38.015497  184902 out.go:368] Setting JSON to false
	I1126 20:42:38.016618  184902 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5088,"bootTime":1764184670,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:42:38.016709  184902 start.go:143] virtualization:  
	I1126 20:42:38.022093  184902 out.go:179] * [pause-166757] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:42:38.025567  184902 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:42:38.025703  184902 notify.go:221] Checking for updates...
	I1126 20:42:38.037245  184902 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:42:38.040789  184902 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:42:38.043854  184902 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:42:38.046919  184902 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:42:38.049856  184902 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:42:38.053366  184902 config.go:182] Loaded profile config "pause-166757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:42:38.054113  184902 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:42:38.092870  184902 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:42:38.093004  184902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:42:38.170380  184902 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:42:38.160587652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:42:38.170490  184902 docker.go:319] overlay module found
	I1126 20:42:38.173611  184902 out.go:179] * Using the docker driver based on existing profile
	I1126 20:42:38.176629  184902 start.go:309] selected driver: docker
	I1126 20:42:38.176647  184902 start.go:927] validating driver "docker" against &{Name:pause-166757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-166757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:42:38.176817  184902 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:42:38.176924  184902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:42:38.231003  184902 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:42:38.22115119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:42:38.231409  184902 cni.go:84] Creating CNI manager for ""
	I1126 20:42:38.231477  184902 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:42:38.231522  184902 start.go:353] cluster config:
	{Name:pause-166757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-166757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:42:38.236496  184902 out.go:179] * Starting "pause-166757" primary control-plane node in "pause-166757" cluster
	I1126 20:42:38.239424  184902 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:42:38.242295  184902 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:42:38.245135  184902 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:42:38.245150  184902 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:42:38.245187  184902 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:42:38.245196  184902 cache.go:65] Caching tarball of preloaded images
	I1126 20:42:38.245262  184902 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:42:38.245271  184902 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:42:38.245408  184902 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/config.json ...
	I1126 20:42:38.267624  184902 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:42:38.267649  184902 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:42:38.267667  184902 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:42:38.267696  184902 start.go:360] acquireMachinesLock for pause-166757: {Name:mk5f9cf6d34bb8aea4563d0f7759f0f2253ef309 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:42:38.267763  184902 start.go:364] duration metric: took 41.918µs to acquireMachinesLock for "pause-166757"
	I1126 20:42:38.267789  184902 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:42:38.267797  184902 fix.go:54] fixHost starting: 
	I1126 20:42:38.268052  184902 cli_runner.go:164] Run: docker container inspect pause-166757 --format={{.State.Status}}
	I1126 20:42:38.284471  184902 fix.go:112] recreateIfNeeded on pause-166757: state=Running err=<nil>
	W1126 20:42:38.284505  184902 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:42:39.940371  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:39.940840  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:39.940884  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:39.940939  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:39.976125  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:39.976149  174302 cri.go:89] found id: ""
	I1126 20:42:39.976157  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:39.976212  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:39.979603  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:39.979673  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:40.043227  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:40.043247  174302 cri.go:89] found id: ""
	I1126 20:42:40.043255  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:40.043327  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:40.059506  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:40.059679  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:40.124195  174302 cri.go:89] found id: ""
	I1126 20:42:40.124219  174302 logs.go:282] 0 containers: []
	W1126 20:42:40.124228  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:40.124235  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:40.124300  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:40.167237  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:40.167315  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:40.167389  174302 cri.go:89] found id: ""
	I1126 20:42:40.167416  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:40.167510  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:40.171741  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:40.175767  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:40.175869  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:40.216219  174302 cri.go:89] found id: ""
	I1126 20:42:40.216254  174302 logs.go:282] 0 containers: []
	W1126 20:42:40.216263  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:40.216270  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:40.216354  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:40.254543  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:40.254570  174302 cri.go:89] found id: ""
	I1126 20:42:40.254578  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:40.254635  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:40.258921  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:40.259023  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:40.295677  174302 cri.go:89] found id: ""
	I1126 20:42:40.295701  174302 logs.go:282] 0 containers: []
	W1126 20:42:40.295712  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:40.295719  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:40.295781  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:40.333228  174302 cri.go:89] found id: ""
	I1126 20:42:40.333253  174302 logs.go:282] 0 containers: []
	W1126 20:42:40.333261  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:40.333276  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:40.333288  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:40.349299  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:40.349330  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:40.421611  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:40.421632  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:40.421645  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:40.471550  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:40.471581  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:40.561329  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:40.561362  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:40.597842  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:40.597871  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:40.673232  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:40.673265  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:40.794627  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:40.794661  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:40.838930  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:40.838961  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:40.876023  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:40.876089  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:42:38.287820  184902 out.go:252] * Updating the running docker "pause-166757" container ...
	I1126 20:42:38.287854  184902 machine.go:94] provisionDockerMachine start ...
	I1126 20:42:38.287939  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:38.304915  184902 main.go:143] libmachine: Using SSH client type: native
	I1126 20:42:38.305302  184902 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1126 20:42:38.305319  184902 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:42:38.453616  184902 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-166757
	
	I1126 20:42:38.453642  184902 ubuntu.go:182] provisioning hostname "pause-166757"
	I1126 20:42:38.453701  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:38.472187  184902 main.go:143] libmachine: Using SSH client type: native
	I1126 20:42:38.472504  184902 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1126 20:42:38.472523  184902 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-166757 && echo "pause-166757" | sudo tee /etc/hostname
	I1126 20:42:38.635978  184902 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-166757
	
	I1126 20:42:38.636066  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:38.654894  184902 main.go:143] libmachine: Using SSH client type: native
	I1126 20:42:38.655209  184902 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1126 20:42:38.655238  184902 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-166757' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-166757/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-166757' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:42:38.802201  184902 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:42:38.802251  184902 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:42:38.802282  184902 ubuntu.go:190] setting up certificates
	I1126 20:42:38.802300  184902 provision.go:84] configureAuth start
	I1126 20:42:38.802362  184902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-166757
	I1126 20:42:38.820026  184902 provision.go:143] copyHostCerts
	I1126 20:42:38.820099  184902 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:42:38.820113  184902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:42:38.820188  184902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:42:38.820302  184902 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:42:38.820313  184902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:42:38.820340  184902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:42:38.820446  184902 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:42:38.820457  184902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:42:38.820486  184902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:42:38.820551  184902 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.pause-166757 san=[127.0.0.1 192.168.85.2 localhost minikube pause-166757]
	I1126 20:42:38.928735  184902 provision.go:177] copyRemoteCerts
	I1126 20:42:38.928799  184902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:42:38.928841  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:38.946198  184902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/pause-166757/id_rsa Username:docker}
	I1126 20:42:39.049794  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:42:39.068338  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 20:42:39.085395  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:42:39.102978  184902 provision.go:87] duration metric: took 300.650312ms to configureAuth
	I1126 20:42:39.103003  184902 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:42:39.103229  184902 config.go:182] Loaded profile config "pause-166757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:42:39.103344  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:39.121407  184902 main.go:143] libmachine: Using SSH client type: native
	I1126 20:42:39.121728  184902 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1126 20:42:39.121745  184902 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:42:43.425698  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:43.426210  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:43.426268  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:43.426329  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:43.463525  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:43.463549  174302 cri.go:89] found id: ""
	I1126 20:42:43.463557  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:43.463623  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:43.467208  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:43.467309  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:43.506671  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:43.506694  174302 cri.go:89] found id: ""
	I1126 20:42:43.506705  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:43.506799  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:43.510413  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:43.510487  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:43.547424  174302 cri.go:89] found id: ""
	I1126 20:42:43.547497  174302 logs.go:282] 0 containers: []
	W1126 20:42:43.547521  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:43.547540  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:43.547628  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:43.587975  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:43.587999  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:43.588004  174302 cri.go:89] found id: ""
	I1126 20:42:43.588011  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:43.588068  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:43.591892  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:43.595580  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:43.595676  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:43.637077  174302 cri.go:89] found id: ""
	I1126 20:42:43.637103  174302 logs.go:282] 0 containers: []
	W1126 20:42:43.637112  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:43.637118  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:43.637175  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:43.675581  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:43.675604  174302 cri.go:89] found id: ""
	I1126 20:42:43.675612  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:43.675691  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:43.679639  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:43.679728  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:43.714936  174302 cri.go:89] found id: ""
	I1126 20:42:43.715003  174302 logs.go:282] 0 containers: []
	W1126 20:42:43.715017  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:43.715024  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:43.715092  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:43.752756  174302 cri.go:89] found id: ""
	I1126 20:42:43.752782  174302 logs.go:282] 0 containers: []
	W1126 20:42:43.752791  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:43.752807  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:43.752819  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:43.874035  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:43.874073  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:43.941525  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:43.941592  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:43.941629  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:43.984068  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:43.984100  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:44.032026  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:44.032058  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:44.069961  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:44.069990  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:44.142413  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:44.142447  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:44.157944  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:44.157970  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:44.246288  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:44.246323  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:44.285867  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:44.285896  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:42:46.853992  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:46.854434  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:46.854484  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:46.854543  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:46.891485  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:46.891507  174302 cri.go:89] found id: ""
	I1126 20:42:46.891515  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:46.891569  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:46.894997  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:46.895074  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:46.934842  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:46.934866  174302 cri.go:89] found id: ""
	I1126 20:42:46.934874  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:46.934929  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:46.938487  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:46.938564  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:46.995352  174302 cri.go:89] found id: ""
	I1126 20:42:46.995377  174302 logs.go:282] 0 containers: []
	W1126 20:42:46.995385  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:46.995392  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:46.995449  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:47.033514  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:47.033536  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:47.033541  174302 cri.go:89] found id: ""
	I1126 20:42:47.033549  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:47.033605  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:47.037996  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:47.041289  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:47.041367  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:47.080358  174302 cri.go:89] found id: ""
	I1126 20:42:47.080382  174302 logs.go:282] 0 containers: []
	W1126 20:42:47.080392  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:47.080398  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:47.080453  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:47.118826  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:47.118847  174302 cri.go:89] found id: ""
	I1126 20:42:47.118855  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:47.118908  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:47.122707  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:47.122778  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:47.167756  174302 cri.go:89] found id: ""
	I1126 20:42:47.167779  174302 logs.go:282] 0 containers: []
	W1126 20:42:47.167787  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:47.167794  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:47.167850  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:47.203349  174302 cri.go:89] found id: ""
	I1126 20:42:47.203371  174302 logs.go:282] 0 containers: []
	W1126 20:42:47.203379  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:47.203393  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:47.203412  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:47.334693  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:47.334725  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:47.350516  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:47.350541  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:47.397393  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:47.397420  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:47.440372  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:47.440400  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:47.476444  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:47.476470  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:47.511538  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:47.511563  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:42:47.554136  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:47.554166  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:47.628857  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:47.628879  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:47.628892  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:47.715814  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:47.715848  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:44.500837  184902 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:42:44.500860  184902 machine.go:97] duration metric: took 6.21299805s to provisionDockerMachine
	I1126 20:42:44.500872  184902 start.go:293] postStartSetup for "pause-166757" (driver="docker")
	I1126 20:42:44.500883  184902 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:42:44.500941  184902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:42:44.501002  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:44.518783  184902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/pause-166757/id_rsa Username:docker}
	I1126 20:42:44.625311  184902 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:42:44.628946  184902 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:42:44.628984  184902 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:42:44.628996  184902 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:42:44.629051  184902 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:42:44.629139  184902 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:42:44.629241  184902 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:42:44.636861  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:42:44.654948  184902 start.go:296] duration metric: took 154.060756ms for postStartSetup
	I1126 20:42:44.655028  184902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:42:44.655070  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:44.672310  184902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/pause-166757/id_rsa Username:docker}
	I1126 20:42:44.775202  184902 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:42:44.780264  184902 fix.go:56] duration metric: took 6.512458303s for fixHost
	I1126 20:42:44.780290  184902 start.go:83] releasing machines lock for "pause-166757", held for 6.512511084s
	I1126 20:42:44.780372  184902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-166757
	I1126 20:42:44.797099  184902 ssh_runner.go:195] Run: cat /version.json
	I1126 20:42:44.797165  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:44.797413  184902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:42:44.797471  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:44.822183  184902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/pause-166757/id_rsa Username:docker}
	I1126 20:42:44.826078  184902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/pause-166757/id_rsa Username:docker}
	I1126 20:42:45.044740  184902 ssh_runner.go:195] Run: systemctl --version
	I1126 20:42:45.054711  184902 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:42:45.124906  184902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:42:45.132403  184902 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:42:45.132494  184902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:42:45.143102  184902 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:42:45.143137  184902 start.go:496] detecting cgroup driver to use...
	I1126 20:42:45.143176  184902 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:42:45.143249  184902 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:42:45.176799  184902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:42:45.224328  184902 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:42:45.224442  184902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:42:45.261348  184902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:42:45.322071  184902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:42:45.576340  184902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:42:45.875313  184902 docker.go:234] disabling docker service ...
	I1126 20:42:45.875389  184902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:42:45.893977  184902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:42:45.908275  184902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:42:46.111439  184902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:42:46.340644  184902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:42:46.357492  184902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:42:46.375743  184902 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:42:46.375833  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.384594  184902 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:42:46.384667  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.396443  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.408363  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.420304  184902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:42:46.431920  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.453322  184902 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.464678  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.479097  184902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:42:46.487446  184902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:42:46.495542  184902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:42:46.728163  184902 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:42:50.289260  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:50.289713  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:50.289764  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:50.289830  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:50.331413  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:50.331435  174302 cri.go:89] found id: ""
	I1126 20:42:50.331444  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:50.331502  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:50.335102  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:50.335171  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:50.371490  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:50.371518  174302 cri.go:89] found id: ""
	I1126 20:42:50.371526  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:50.371581  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:50.375177  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:50.375297  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:50.413778  174302 cri.go:89] found id: ""
	I1126 20:42:50.413805  174302 logs.go:282] 0 containers: []
	W1126 20:42:50.413815  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:50.413821  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:50.413880  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:50.454411  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:50.454435  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:50.454440  174302 cri.go:89] found id: ""
	I1126 20:42:50.454447  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:50.454510  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:50.458064  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:50.461559  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:50.461651  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:50.503213  174302 cri.go:89] found id: ""
	I1126 20:42:50.503249  174302 logs.go:282] 0 containers: []
	W1126 20:42:50.503259  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:50.503265  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:50.503325  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:50.540076  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:50.540097  174302 cri.go:89] found id: ""
	I1126 20:42:50.540106  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:50.540161  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:50.543698  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:50.543773  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:50.579770  174302 cri.go:89] found id: ""
	I1126 20:42:50.579796  174302 logs.go:282] 0 containers: []
	W1126 20:42:50.579805  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:50.579812  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:50.579868  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:50.619971  174302 cri.go:89] found id: ""
	I1126 20:42:50.620004  174302 logs.go:282] 0 containers: []
	W1126 20:42:50.620014  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:50.620027  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:50.620039  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:50.742264  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:50.742296  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:50.758754  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:50.758784  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:50.839315  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:50.839376  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:50.839396  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:50.885299  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:50.885330  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:50.960239  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:50.960272  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:42:50.999930  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:50.999957  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:51.043310  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:51.043338  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:51.137429  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:51.137470  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:51.177553  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:51.177585  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:56.029706  184902 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.301506325s)
	I1126 20:42:56.029733  184902 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:42:56.029786  184902 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:42:56.034148  184902 start.go:564] Will wait 60s for crictl version
	I1126 20:42:56.034225  184902 ssh_runner.go:195] Run: which crictl
	I1126 20:42:56.038226  184902 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:42:56.069598  184902 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:42:56.069686  184902 ssh_runner.go:195] Run: crio --version
	I1126 20:42:56.100898  184902 ssh_runner.go:195] Run: crio --version
	I1126 20:42:56.143165  184902 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:42:56.146269  184902 cli_runner.go:164] Run: docker network inspect pause-166757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:42:56.162713  184902 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:42:56.166643  184902 kubeadm.go:884] updating cluster {Name:pause-166757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-166757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:42:56.166788  184902 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:42:56.166853  184902 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:42:56.203751  184902 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:42:56.203779  184902 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:42:56.203837  184902 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:42:56.228422  184902 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:42:56.228449  184902 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:42:56.228456  184902 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1126 20:42:56.228558  184902 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-166757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-166757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:42:56.228640  184902 ssh_runner.go:195] Run: crio config
	I1126 20:42:56.287060  184902 cni.go:84] Creating CNI manager for ""
	I1126 20:42:56.287081  184902 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:42:56.287105  184902 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:42:56.287132  184902 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-166757 NodeName:pause-166757 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:42:56.287269  184902 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-166757"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:42:56.287341  184902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:42:56.295131  184902 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:42:56.295208  184902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:42:56.302528  184902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1126 20:42:56.315798  184902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:42:56.328809  184902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1126 20:42:56.341587  184902 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:42:56.345243  184902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:42:56.496566  184902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:42:56.509656  184902 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757 for IP: 192.168.85.2
	I1126 20:42:56.509679  184902 certs.go:195] generating shared ca certs ...
	I1126 20:42:56.509694  184902 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:42:56.509860  184902 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:42:56.509969  184902 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:42:56.509990  184902 certs.go:257] generating profile certs ...
	I1126 20:42:56.510099  184902 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/client.key
	I1126 20:42:56.510169  184902 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/apiserver.key.edbe23e7
	I1126 20:42:56.510214  184902 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/proxy-client.key
	I1126 20:42:56.510325  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:42:56.510373  184902 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:42:56.510387  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:42:56.510416  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:42:56.510457  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:42:56.510488  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:42:56.510543  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:42:56.511296  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:42:56.531516  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:42:56.548938  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:42:56.567492  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:42:56.584319  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 20:42:56.601943  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:42:56.628933  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:42:56.647930  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:42:56.666458  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:42:56.684916  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:42:56.702718  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:42:56.721215  184902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:42:56.734393  184902 ssh_runner.go:195] Run: openssl version
	I1126 20:42:56.740640  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:42:56.749059  184902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:42:56.752658  184902 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:42:56.752748  184902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:42:56.794723  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:42:56.802922  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:42:56.811190  184902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:42:56.814824  184902 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:42:56.814928  184902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:42:56.855831  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:42:56.863628  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:42:56.871733  184902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:42:56.875319  184902 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:42:56.875435  184902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:42:56.916262  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:42:56.924249  184902 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:42:56.928508  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:42:56.971627  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:42:57.015286  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:42:57.057508  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:42:57.098601  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:42:57.139597  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:42:57.181480  184902 kubeadm.go:401] StartCluster: {Name:pause-166757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-166757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:42:57.181637  184902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:42:57.181704  184902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:42:57.225890  184902 cri.go:89] found id: "2db020b8c32b522251976eced59d8bb3bac5adab09d141a0bf566661e506974c"
	I1126 20:42:57.225909  184902 cri.go:89] found id: "0db000c6d2320c82ec9be70d6c38cf52db881b458ac9fcbb65a9de481d9005fd"
	I1126 20:42:57.225913  184902 cri.go:89] found id: "60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	I1126 20:42:57.225916  184902 cri.go:89] found id: "d3ad91d7746bb4b386071782c6f36969bb925be7fbcfcd4d33a447d23efb7975"
	I1126 20:42:57.225944  184902 cri.go:89] found id: "4dee54f7f5168459562bdac0a84ab912b1e6d20efea644ea468f645384533723"
	I1126 20:42:57.225948  184902 cri.go:89] found id: "a84e4d20f1907030703fc54a2a88bc2779dec332e6e8415d049b55a34abd0119"
	I1126 20:42:57.225951  184902 cri.go:89] found id: "6dffcf8b996742928728e2c585061644cc362bcb92cdff0791c4434cf0f2073a"
	I1126 20:42:57.225954  184902 cri.go:89] found id: "6f73d60362531c85177302c22f2f1558a8f9f96309baa3cca8ee2a994661c583"
	I1126 20:42:57.225957  184902 cri.go:89] found id: "97381f7b321c19f78df8e35bcd215fb879395945793d05255aa19eedfec476e0"
	I1126 20:42:57.225965  184902 cri.go:89] found id: "c11d4d76b5030322394f2928ebbca2cdde33bb90f61362d7dee70fa18b14711d"
	I1126 20:42:57.225969  184902 cri.go:89] found id: "145cb6afa55034a23db4a9ad4ef5f1ae8d82b6d44e24936232513aa2bf8ae758"
	I1126 20:42:57.225972  184902 cri.go:89] found id: "a7d9dc021a8d9179b9b73a643682a1364e4deea5cdd586389fefaa57bd0bf601"
	I1126 20:42:57.225976  184902 cri.go:89] found id: "5358710efec2a46ce31c272e0d7f8949694cd7300a389f2e5ef3016fa8458d3b"
	I1126 20:42:57.225983  184902 cri.go:89] found id: "db11bad774b4a4bfedcd139e4ff4e88d55fb014c71e7cc7cc2dd585051987b3a"
	I1126 20:42:57.225987  184902 cri.go:89] found id: ""
	I1126 20:42:57.226037  184902 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:42:57.240337  184902 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:42:57Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:42:57.240421  184902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:42:57.251325  184902 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:42:57.251350  184902 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:42:57.251402  184902 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:42:57.259349  184902 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:42:57.259966  184902 kubeconfig.go:125] found "pause-166757" server: "https://192.168.85.2:8443"
	I1126 20:42:57.260754  184902 kapi.go:59] client config for pause-166757: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:42:57.261233  184902 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1126 20:42:57.261256  184902 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1126 20:42:57.261268  184902 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1126 20:42:57.261273  184902 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1126 20:42:57.261277  184902 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1126 20:42:57.261538  184902 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:42:57.284656  184902 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1126 20:42:57.284726  184902 kubeadm.go:602] duration metric: took 33.369518ms to restartPrimaryControlPlane
	I1126 20:42:57.284750  184902 kubeadm.go:403] duration metric: took 103.278284ms to StartCluster
	I1126 20:42:57.284801  184902 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:42:57.284910  184902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:42:57.286078  184902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:42:57.286378  184902 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:42:57.286791  184902 config.go:182] Loaded profile config "pause-166757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:42:57.287038  184902 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:42:57.291494  184902 out.go:179] * Verifying Kubernetes components...
	I1126 20:42:57.291589  184902 out.go:179] * Enabled addons: 
	I1126 20:42:53.720285  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:53.720760  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:53.720812  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:53.720877  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:53.757361  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:53.757382  174302 cri.go:89] found id: ""
	I1126 20:42:53.757390  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:53.757454  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:53.760881  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:53.760949  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:53.802367  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:53.802399  174302 cri.go:89] found id: ""
	I1126 20:42:53.802408  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:53.802465  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:53.806128  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:53.806223  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:53.844172  174302 cri.go:89] found id: ""
	I1126 20:42:53.844200  174302 logs.go:282] 0 containers: []
	W1126 20:42:53.844209  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:53.844215  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:53.844275  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:53.886213  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:53.886232  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:53.886238  174302 cri.go:89] found id: ""
	I1126 20:42:53.886245  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:53.886300  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:53.889899  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:53.893465  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:53.893536  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:53.930218  174302 cri.go:89] found id: ""
	I1126 20:42:53.930239  174302 logs.go:282] 0 containers: []
	W1126 20:42:53.930247  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:53.930254  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:53.930310  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:53.966731  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:53.966753  174302 cri.go:89] found id: ""
	I1126 20:42:53.966761  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:53.966822  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:53.970291  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:53.970362  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:54.013498  174302 cri.go:89] found id: ""
	I1126 20:42:54.013525  174302 logs.go:282] 0 containers: []
	W1126 20:42:54.013535  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:54.013544  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:54.014189  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:54.052950  174302 cri.go:89] found id: ""
	I1126 20:42:54.052977  174302 logs.go:282] 0 containers: []
	W1126 20:42:54.052986  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:54.052999  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:54.053011  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:54.096941  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:54.096969  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:54.145283  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:54.145315  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:54.181898  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:54.181949  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:42:54.223107  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:54.223137  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:54.358343  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:54.358389  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:54.375057  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:54.375094  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:54.449955  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:54.450020  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:54.450041  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:54.555317  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:54.555352  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:54.592201  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:54.592229  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:57.166230  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:57.166658  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:57.166701  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:57.166755  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:57.212890  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:57.212913  174302 cri.go:89] found id: ""
	I1126 20:42:57.212922  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:57.212976  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:57.217238  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:57.217318  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:57.280838  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:57.280861  174302 cri.go:89] found id: ""
	I1126 20:42:57.280868  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:57.280921  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:57.285007  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:57.285069  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:57.328290  174302 cri.go:89] found id: ""
	I1126 20:42:57.328314  174302 logs.go:282] 0 containers: []
	W1126 20:42:57.328323  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:57.328329  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:57.328388  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:57.389340  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:57.389362  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:57.389367  174302 cri.go:89] found id: ""
	I1126 20:42:57.389373  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:57.389429  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:57.394208  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:57.397814  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:57.397884  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:57.452645  174302 cri.go:89] found id: ""
	I1126 20:42:57.452670  174302 logs.go:282] 0 containers: []
	W1126 20:42:57.452679  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:57.452685  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:57.452742  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:57.507191  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:57.507214  174302 cri.go:89] found id: ""
	I1126 20:42:57.507222  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:57.507286  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:57.513611  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:57.513695  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:57.564709  174302 cri.go:89] found id: ""
	I1126 20:42:57.564731  174302 logs.go:282] 0 containers: []
	W1126 20:42:57.564739  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:57.564747  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:57.564803  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:57.628386  174302 cri.go:89] found id: ""
	I1126 20:42:57.628410  174302 logs.go:282] 0 containers: []
	W1126 20:42:57.628419  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:57.628433  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:57.628444  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:57.805082  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:57.805116  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:57.295164  184902 addons.go:530] duration metric: took 8.106743ms for enable addons: enabled=[]
	I1126 20:42:57.295301  184902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:42:57.479092  184902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:42:57.497286  184902 node_ready.go:35] waiting up to 6m0s for node "pause-166757" to be "Ready" ...
	I1126 20:42:57.833447  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:57.833475  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:57.980055  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:57.980074  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:57.980087  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:58.044843  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:58.044874  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:58.137541  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:58.137572  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:58.209732  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:58.209764  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:58.280534  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:58.280562  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:58.379553  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:58.379591  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:58.520646  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:58.520682  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:01.107107  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:01.107484  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:01.107532  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:01.107588  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:01.171505  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:01.171528  174302 cri.go:89] found id: ""
	I1126 20:43:01.171537  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:01.171591  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:01.175292  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:01.175355  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:01.236742  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:01.236766  174302 cri.go:89] found id: ""
	I1126 20:43:01.236774  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:01.236833  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:01.240521  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:01.240593  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:01.298912  174302 cri.go:89] found id: ""
	I1126 20:43:01.298937  174302 logs.go:282] 0 containers: []
	W1126 20:43:01.298946  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:01.298951  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:01.299007  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:43:01.377570  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:01.377593  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:01.377603  174302 cri.go:89] found id: ""
	I1126 20:43:01.377610  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:43:01.377667  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:01.381411  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:01.385771  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:43:01.385841  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:43:01.450707  174302 cri.go:89] found id: ""
	I1126 20:43:01.450731  174302 logs.go:282] 0 containers: []
	W1126 20:43:01.450739  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:43:01.450745  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:43:01.450802  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:43:01.529380  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:01.529435  174302 cri.go:89] found id: ""
	I1126 20:43:01.529444  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:43:01.529523  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:01.534324  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:43:01.534404  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:43:01.635643  174302 cri.go:89] found id: ""
	I1126 20:43:01.635668  174302 logs.go:282] 0 containers: []
	W1126 20:43:01.635678  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:43:01.635684  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:43:01.635748  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:43:01.694671  174302 cri.go:89] found id: ""
	I1126 20:43:01.694696  174302 logs.go:282] 0 containers: []
	W1126 20:43:01.694705  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:43:01.694720  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:43:01.694732  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:43:01.820274  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:43:01.820299  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:43:01.820318  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:01.921700  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:43:01.921736  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:02.066016  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:43:02.066051  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:02.138225  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:43:02.138299  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:43:02.219876  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:43:02.219959  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:43:02.358191  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:43:02.358267  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:43:02.375546  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:43:02.375617  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:02.424582  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:43:02.424754  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:02.477787  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:43:02.477855  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:02.901097  184902 node_ready.go:49] node "pause-166757" is "Ready"
	I1126 20:43:02.901125  184902 node_ready.go:38] duration metric: took 5.403810504s for node "pause-166757" to be "Ready" ...
	I1126 20:43:02.901139  184902 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:43:02.901194  184902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:43:02.919440  184902 api_server.go:72] duration metric: took 5.633001938s to wait for apiserver process to appear ...
	I1126 20:43:02.919463  184902 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:43:02.919481  184902 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:43:02.982654  184902 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:43:02.982684  184902 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:43:05.082394  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:05.082873  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:05.082924  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:05.082994  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:05.119473  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:05.119495  174302 cri.go:89] found id: ""
	I1126 20:43:05.119503  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:05.119559  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:05.123196  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:05.123272  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:05.162505  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:05.162523  174302 cri.go:89] found id: ""
	I1126 20:43:05.162530  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:05.162589  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:05.166115  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:05.166188  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:05.203289  174302 cri.go:89] found id: ""
	I1126 20:43:05.203313  174302 logs.go:282] 0 containers: []
	W1126 20:43:05.203322  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:05.203330  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:05.203386  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:43:05.244301  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:05.244323  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:05.244328  174302 cri.go:89] found id: ""
	I1126 20:43:05.244335  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:43:05.244390  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:05.248408  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:05.251623  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:43:05.251688  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:43:05.290984  174302 cri.go:89] found id: ""
	I1126 20:43:05.291004  174302 logs.go:282] 0 containers: []
	W1126 20:43:05.291012  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:43:05.291019  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:43:05.291083  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:43:05.336638  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:05.336707  174302 cri.go:89] found id: ""
	I1126 20:43:05.336730  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:43:05.336806  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:05.340536  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:43:05.340605  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:43:05.383695  174302 cri.go:89] found id: ""
	I1126 20:43:05.383771  174302 logs.go:282] 0 containers: []
	W1126 20:43:05.383788  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:43:05.383795  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:43:05.383854  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:43:05.422989  174302 cri.go:89] found id: ""
	I1126 20:43:05.423011  174302 logs.go:282] 0 containers: []
	W1126 20:43:05.423020  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:43:05.423033  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:43:05.423046  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:05.467648  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:43:05.467677  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:05.560258  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:43:05.560299  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:05.615215  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:43:05.615291  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:43:05.633340  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:43:05.633368  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:05.691795  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:43:05.691825  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:05.739736  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:43:05.739766  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:05.775317  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:43:05.775343  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:43:05.846454  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:43:05.846485  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:43:05.974670  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:43:05.974707  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:43:06.050421  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:43:03.420003  184902 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:43:03.429479  184902 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:43:03.429506  184902 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:43:03.920165  184902 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:43:03.928468  184902 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:43:03.928539  184902 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:43:04.420322  184902 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:43:04.428545  184902 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1126 20:43:04.429663  184902 api_server.go:141] control plane version: v1.34.1
	I1126 20:43:04.429721  184902 api_server.go:131] duration metric: took 1.510250692s to wait for apiserver health ...
	I1126 20:43:04.429737  184902 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:43:04.433108  184902 system_pods.go:59] 7 kube-system pods found
	I1126 20:43:04.433151  184902 system_pods.go:61] "coredns-66bc5c9577-f8dk5" [1e650291-05a3-45a5-9886-938e718690d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:43:04.433160  184902 system_pods.go:61] "etcd-pause-166757" [1d89bc54-cd9f-4b6d-a8dd-859d96a1c436] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:43:04.433172  184902 system_pods.go:61] "kindnet-bdwwv" [f354cff5-9bb8-4013-9902-e4e72447beca] Running
	I1126 20:43:04.433178  184902 system_pods.go:61] "kube-apiserver-pause-166757" [e5703723-3967-4fb2-a8fd-83cdf9aeef3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:43:04.433183  184902 system_pods.go:61] "kube-controller-manager-pause-166757" [59567e7b-f221-4488-98dd-02435a3fd7e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:43:04.433190  184902 system_pods.go:61] "kube-proxy-tlg46" [0c1d444f-b32a-44c7-a1eb-ed3e962ba28f] Running
	I1126 20:43:04.433195  184902 system_pods.go:61] "kube-scheduler-pause-166757" [8f0ec421-1e52-447d-8235-08a1f90674a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:43:04.433205  184902 system_pods.go:74] duration metric: took 3.462839ms to wait for pod list to return data ...
	I1126 20:43:04.433217  184902 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:43:04.435916  184902 default_sa.go:45] found service account: "default"
	I1126 20:43:04.435938  184902 default_sa.go:55] duration metric: took 2.716087ms for default service account to be created ...
	I1126 20:43:04.435948  184902 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:43:04.438679  184902 system_pods.go:86] 7 kube-system pods found
	I1126 20:43:04.438716  184902 system_pods.go:89] "coredns-66bc5c9577-f8dk5" [1e650291-05a3-45a5-9886-938e718690d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:43:04.438757  184902 system_pods.go:89] "etcd-pause-166757" [1d89bc54-cd9f-4b6d-a8dd-859d96a1c436] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:43:04.438765  184902 system_pods.go:89] "kindnet-bdwwv" [f354cff5-9bb8-4013-9902-e4e72447beca] Running
	I1126 20:43:04.438775  184902 system_pods.go:89] "kube-apiserver-pause-166757" [e5703723-3967-4fb2-a8fd-83cdf9aeef3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:43:04.438786  184902 system_pods.go:89] "kube-controller-manager-pause-166757" [59567e7b-f221-4488-98dd-02435a3fd7e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:43:04.438797  184902 system_pods.go:89] "kube-proxy-tlg46" [0c1d444f-b32a-44c7-a1eb-ed3e962ba28f] Running
	I1126 20:43:04.438804  184902 system_pods.go:89] "kube-scheduler-pause-166757" [8f0ec421-1e52-447d-8235-08a1f90674a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:43:04.438817  184902 system_pods.go:126] duration metric: took 2.864053ms to wait for k8s-apps to be running ...
	I1126 20:43:04.438826  184902 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:43:04.438879  184902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:43:04.450936  184902 system_svc.go:56] duration metric: took 12.101167ms WaitForService to wait for kubelet
	I1126 20:43:04.450965  184902 kubeadm.go:587] duration metric: took 7.164529784s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:43:04.450985  184902 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:43:04.453993  184902 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:43:04.454026  184902 node_conditions.go:123] node cpu capacity is 2
	I1126 20:43:04.454038  184902 node_conditions.go:105] duration metric: took 3.026443ms to run NodePressure ...
	I1126 20:43:04.454051  184902 start.go:242] waiting for startup goroutines ...
	I1126 20:43:04.454058  184902 start.go:247] waiting for cluster config update ...
	I1126 20:43:04.454067  184902 start.go:256] writing updated cluster config ...
	I1126 20:43:04.454405  184902 ssh_runner.go:195] Run: rm -f paused
	I1126 20:43:04.457661  184902 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:43:04.458309  184902 kapi.go:59] client config for pause-166757: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:43:04.461283  184902 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f8dk5" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:43:06.466904  184902 pod_ready.go:104] pod "coredns-66bc5c9577-f8dk5" is not "Ready", error: <nil>
	I1126 20:43:08.550605  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:08.551025  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:08.551078  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:08.551141  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:08.590294  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:08.590313  174302 cri.go:89] found id: ""
	I1126 20:43:08.590320  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:08.590376  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:08.594009  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:08.594086  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:08.636695  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:08.636716  174302 cri.go:89] found id: ""
	I1126 20:43:08.636725  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:08.636791  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:08.641394  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:08.641461  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:08.679491  174302 cri.go:89] found id: ""
	I1126 20:43:08.679514  174302 logs.go:282] 0 containers: []
	W1126 20:43:08.679523  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:08.679529  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:08.679585  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:43:08.717328  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:08.717347  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:08.717352  174302 cri.go:89] found id: ""
	I1126 20:43:08.717358  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:43:08.717423  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:08.720885  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:08.724684  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:43:08.724751  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:43:08.760902  174302 cri.go:89] found id: ""
	I1126 20:43:08.760975  174302 logs.go:282] 0 containers: []
	W1126 20:43:08.760998  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:43:08.761017  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:43:08.761104  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:43:08.798425  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:08.798445  174302 cri.go:89] found id: ""
	I1126 20:43:08.798454  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:43:08.798512  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:08.801816  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:43:08.801878  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:43:08.837958  174302 cri.go:89] found id: ""
	I1126 20:43:08.837984  174302 logs.go:282] 0 containers: []
	W1126 20:43:08.837999  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:43:08.838006  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:43:08.838064  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:43:08.873506  174302 cri.go:89] found id: ""
	I1126 20:43:08.873533  174302 logs.go:282] 0 containers: []
	W1126 20:43:08.873542  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:43:08.873556  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:43:08.873567  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:08.919435  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:43:08.919462  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:08.957083  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:43:08.957112  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:08.998769  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:43:08.998797  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:43:09.124234  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:43:09.124306  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:43:09.202267  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:43:09.202296  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:43:09.202309  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:09.252073  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:43:09.252104  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:09.352990  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:43:09.353025  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:43:09.423712  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:43:09.423747  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:09.471893  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:43:09.471920  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:43:11.987497  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:11.987946  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:11.988018  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:11.988088  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:12.037806  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:12.037831  174302 cri.go:89] found id: ""
	I1126 20:43:12.037838  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:12.037906  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:12.042427  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:12.042506  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:12.086616  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:12.086681  174302 cri.go:89] found id: ""
	I1126 20:43:12.086706  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:12.086788  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:12.090393  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:12.090511  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:12.127765  174302 cri.go:89] found id: ""
	I1126 20:43:12.127831  174302 logs.go:282] 0 containers: []
	W1126 20:43:12.127855  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:12.127873  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:12.127954  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:43:12.166923  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:12.166986  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:12.166998  174302 cri.go:89] found id: ""
	I1126 20:43:12.167013  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:43:12.167078  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:12.170549  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:12.173838  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:43:12.173916  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:43:12.211738  174302 cri.go:89] found id: ""
	I1126 20:43:12.211764  174302 logs.go:282] 0 containers: []
	W1126 20:43:12.211785  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:43:12.211792  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:43:12.211858  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:43:12.250173  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:12.250193  174302 cri.go:89] found id: ""
	I1126 20:43:12.250200  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:43:12.250254  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:12.253878  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:43:12.254002  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:43:12.289215  174302 cri.go:89] found id: ""
	I1126 20:43:12.289236  174302 logs.go:282] 0 containers: []
	W1126 20:43:12.289244  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:43:12.289251  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:43:12.289307  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:43:12.324267  174302 cri.go:89] found id: ""
	I1126 20:43:12.324290  174302 logs.go:282] 0 containers: []
	W1126 20:43:12.324298  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:43:12.324313  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:43:12.324326  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:43:12.447183  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:43:12.447218  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:43:12.462865  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:43:12.462896  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:43:12.539169  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:43:12.539235  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:43:12.539263  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:12.582542  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:43:12.582573  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:12.693482  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:43:12.693515  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:12.733460  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:43:12.733529  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:43:12.807128  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:43:12.807163  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	W1126 20:43:08.467337  184902 pod_ready.go:104] pod "coredns-66bc5c9577-f8dk5" is not "Ready", error: <nil>
	W1126 20:43:10.966812  184902 pod_ready.go:104] pod "coredns-66bc5c9577-f8dk5" is not "Ready", error: <nil>
	W1126 20:43:12.967534  184902 pod_ready.go:104] pod "coredns-66bc5c9577-f8dk5" is not "Ready", error: <nil>
	I1126 20:43:12.849793  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:43:12.849823  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:12.891691  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:43:12.891724  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:15.443160  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:15.443645  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:15.443699  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:15.443762  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:15.486735  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:15.486754  174302 cri.go:89] found id: ""
	I1126 20:43:15.486762  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:15.486816  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:15.490632  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:15.490699  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:15.533870  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:15.533978  174302 cri.go:89] found id: ""
	I1126 20:43:15.534003  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:15.534080  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:15.537762  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:15.537891  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:15.574630  174302 cri.go:89] found id: ""
	I1126 20:43:15.574704  174302 logs.go:282] 0 containers: []
	W1126 20:43:15.574733  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:15.574747  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:15.574810  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:43:15.611230  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:15.611255  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:15.611260  174302 cri.go:89] found id: ""
	I1126 20:43:15.611275  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:43:15.611331  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:15.616229  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:15.620649  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:43:15.620720  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:43:15.655879  174302 cri.go:89] found id: ""
	I1126 20:43:15.655955  174302 logs.go:282] 0 containers: []
	W1126 20:43:15.655970  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:43:15.655978  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:43:15.656038  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:43:15.692745  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:15.692767  174302 cri.go:89] found id: ""
	I1126 20:43:15.692775  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:43:15.692830  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:15.696280  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:43:15.696346  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:43:15.735490  174302 cri.go:89] found id: ""
	I1126 20:43:15.735511  174302 logs.go:282] 0 containers: []
	W1126 20:43:15.735520  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:43:15.735526  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:43:15.735586  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:43:15.772361  174302 cri.go:89] found id: ""
	I1126 20:43:15.772385  174302 logs.go:282] 0 containers: []
	W1126 20:43:15.772394  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:43:15.772415  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:43:15.772427  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:43:15.908295  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:43:15.908395  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:43:15.931507  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:43:15.931606  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:15.979016  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:43:15.979044  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:16.020683  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:43:16.020716  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:43:16.092286  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:43:16.092324  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:16.132712  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:43:16.132753  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:43:16.218133  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:43:16.218152  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:43:16.218165  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:16.261621  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:43:16.261654  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:16.313384  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:43:16.313422  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	W1126 20:43:15.468249  184902 pod_ready.go:104] pod "coredns-66bc5c9577-f8dk5" is not "Ready", error: <nil>
	I1126 20:43:17.966489  184902 pod_ready.go:94] pod "coredns-66bc5c9577-f8dk5" is "Ready"
	I1126 20:43:17.966516  184902 pod_ready.go:86] duration metric: took 13.505205833s for pod "coredns-66bc5c9577-f8dk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:17.968859  184902 pod_ready.go:83] waiting for pod "etcd-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:17.973158  184902 pod_ready.go:94] pod "etcd-pause-166757" is "Ready"
	I1126 20:43:17.973183  184902 pod_ready.go:86] duration metric: took 4.297951ms for pod "etcd-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:17.975461  184902 pod_ready.go:83] waiting for pod "kube-apiserver-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:17.979758  184902 pod_ready.go:94] pod "kube-apiserver-pause-166757" is "Ready"
	I1126 20:43:17.979784  184902 pod_ready.go:86] duration metric: took 4.302061ms for pod "kube-apiserver-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:17.982124  184902 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:18.164898  184902 pod_ready.go:94] pod "kube-controller-manager-pause-166757" is "Ready"
	I1126 20:43:18.164922  184902 pod_ready.go:86] duration metric: took 182.776758ms for pod "kube-controller-manager-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:18.365010  184902 pod_ready.go:83] waiting for pod "kube-proxy-tlg46" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:18.765072  184902 pod_ready.go:94] pod "kube-proxy-tlg46" is "Ready"
	I1126 20:43:18.765099  184902 pod_ready.go:86] duration metric: took 400.064696ms for pod "kube-proxy-tlg46" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:18.969626  184902 pod_ready.go:83] waiting for pod "kube-scheduler-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:19.365259  184902 pod_ready.go:94] pod "kube-scheduler-pause-166757" is "Ready"
	I1126 20:43:19.365282  184902 pod_ready.go:86] duration metric: took 395.635183ms for pod "kube-scheduler-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:19.365294  184902 pod_ready.go:40] duration metric: took 14.907577048s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:43:19.440494  184902 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:43:19.445822  184902 out.go:179] * Done! kubectl is now configured to use "pause-166757" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 20:42:58 pause-166757 crio[2213]: time="2025-11-26T20:42:58.00403624Z" level=info msg="Removed container 6f73d60362531c85177302c22f2f1558a8f9f96309baa3cca8ee2a994661c583: kube-system/coredns-66bc5c9577-f8dk5/coredns" id=9781c7a0-d6aa-4dcd-b511-8d7434556224 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.275965566Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.279230408Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.279263785Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.279290672Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.282702142Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.282731326Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.282746349Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.285578363Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.285610017Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.285660181Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.288377073Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.28840981Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.288431479Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.291279631Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.291308717Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.006154439Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=e40179f6-b012-4fb0-a6f1-23ef4217eb18 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.007500275Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=b6ea76de-7df6-450d-aa6f-5c06c827d91b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.011088764Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-f8dk5/coredns" id=7af11dde-15f2-4248-90e8-e58d8748a8a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.011313871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.024868828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.025527363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.044990629Z" level=info msg="Created container ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d: kube-system/coredns-66bc5c9577-f8dk5/coredns" id=7af11dde-15f2-4248-90e8-e58d8748a8a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.046091748Z" level=info msg="Starting container: ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d" id=002fd396-d5c6-4677-b3ca-c3458feb877c name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.047734072Z" level=info msg="Started container" PID=2774 containerID=ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d description=kube-system/coredns-66bc5c9577-f8dk5/coredns id=002fd396-d5c6-4677-b3ca-c3458feb877c name=/runtime.v1.RuntimeService/StartContainer sandboxID=fae3cfb4df00460a4e54af77063c6a86c8856706f9296ed1b30e0b125df0932b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ff6913ff92f7a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   6 seconds ago       Running             coredns                   2                   fae3cfb4df004       coredns-66bc5c9577-f8dk5               kube-system
	ff0a5f1227925       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   25 seconds ago      Running             kindnet-cni               2                   3e7f0cdb76091       kindnet-bdwwv                          kube-system
	bf90263bd4f1c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   25 seconds ago      Running             kube-proxy                2                   c266ae892eda7       kube-proxy-tlg46                       kube-system
	eac939c08bc98       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   25 seconds ago      Running             etcd                      2                   620aab564233c       etcd-pause-166757                      kube-system
	8280393973d71       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   25 seconds ago      Running             kube-apiserver            2                   33a7b14ffdf2c       kube-apiserver-pause-166757            kube-system
	4f7996a732bd7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   25 seconds ago      Running             kube-controller-manager   2                   67ae632467a48       kube-controller-manager-pause-166757   kube-system
	091ca865eebb2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   25 seconds ago      Running             kube-scheduler            2                   0f00b3b379c4d       kube-scheduler-pause-166757            kube-system
	2db020b8c32b5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   37 seconds ago      Exited              kube-apiserver            1                   33a7b14ffdf2c       kube-apiserver-pause-166757            kube-system
	0db000c6d2320       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   37 seconds ago      Exited              kube-scheduler            1                   0f00b3b379c4d       kube-scheduler-pause-166757            kube-system
	60b0ffbf35dd0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago      Exited              coredns                   1                   fae3cfb4df004       coredns-66bc5c9577-f8dk5               kube-system
	d3ad91d7746bb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   37 seconds ago      Exited              kindnet-cni               1                   3e7f0cdb76091       kindnet-bdwwv                          kube-system
	4dee54f7f5168       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   37 seconds ago      Exited              kube-proxy                1                   c266ae892eda7       kube-proxy-tlg46                       kube-system
	a84e4d20f1907       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   37 seconds ago      Exited              kube-controller-manager   1                   67ae632467a48       kube-controller-manager-pause-166757   kube-system
	6dffcf8b99674       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   37 seconds ago      Exited              etcd                      1                   620aab564233c       etcd-pause-166757                      kube-system
	
	
	==> coredns [60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49685 - 64690 "HINFO IN 1303599683672573835.5790048047726244057. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004630423s
	
	
	==> coredns [ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41572 - 9942 "HINFO IN 1681300439164332242.8011566670377224255. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012742238s
	
	
	==> describe nodes <==
	Name:               pause-166757
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-166757
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=pause-166757
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_41_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:41:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-166757
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:43:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:43:18 +0000   Wed, 26 Nov 2025 20:41:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:43:18 +0000   Wed, 26 Nov 2025 20:41:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:43:18 +0000   Wed, 26 Nov 2025 20:41:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:43:18 +0000   Wed, 26 Nov 2025 20:42:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-166757
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                d10a9b8f-65c2-47ef-a8f7-afd4c450fae8
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-f8dk5                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     90s
	  kube-system                 etcd-pause-166757                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         97s
	  kube-system                 kindnet-bdwwv                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-pause-166757             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-pause-166757    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-tlg46                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-pause-166757             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 88s                  kube-proxy       
	  Normal   Starting                 20s                  kube-proxy       
	  Warning  CgroupV1                 104s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node pause-166757 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node pause-166757 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     104s (x8 over 104s)  kubelet          Node pause-166757 status is now: NodeHasSufficientPID
	  Normal   Starting                 97s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 97s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  96s                  kubelet          Node pause-166757 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    96s                  kubelet          Node pause-166757 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     96s                  kubelet          Node pause-166757 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           91s                  node-controller  Node pause-166757 event: Registered Node pause-166757 in Controller
	  Normal   NodeReady                48s                  kubelet          Node pause-166757 status is now: NodeReady
	  Warning  ContainerGCFailed        36s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           17s                  node-controller  Node pause-166757 event: Registered Node pause-166757 in Controller
	
	
	==> dmesg <==
	[  +3.105496] overlayfs: idmapped layers are currently not supported
	[ +37.228314] overlayfs: idmapped layers are currently not supported
	[Nov26 20:05] overlayfs: idmapped layers are currently not supported
	[Nov26 20:06] overlayfs: idmapped layers are currently not supported
	[  +3.713866] overlayfs: idmapped layers are currently not supported
	[Nov26 20:14] overlayfs: idmapped layers are currently not supported
	[Nov26 20:16] overlayfs: idmapped layers are currently not supported
	[Nov26 20:21] overlayfs: idmapped layers are currently not supported
	[ +33.563196] overlayfs: idmapped layers are currently not supported
	[Nov26 20:23] overlayfs: idmapped layers are currently not supported
	[Nov26 20:24] overlayfs: idmapped layers are currently not supported
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6dffcf8b996742928728e2c585061644cc362bcb92cdff0791c4434cf0f2073a] <==
	{"level":"info","ts":"2025-11-26T20:42:45.871728Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-26T20:42:45.871916Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:42:45.872123Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-26T20:42:45.872166Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-11-26T20:42:45.872946Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-11-26T20:42:45.873069Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-26T20:42:45.899842Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-26T20:42:46.742705Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-26T20:42:46.742747Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-166757","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-26T20:42:46.742888Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-26T20:42:46.744098Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-26T20:42:46.746171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T20:42:46.746622Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-11-26T20:42:46.747022Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-26T20:42:46.747136Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-26T20:42:46.747951Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T20:42:46.747993Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-26T20:42:46.748034Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-26T20:42:46.748160Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-26T20:42:46.748196Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-26T20:42:46.748207Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T20:42:46.758820Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-26T20:42:46.758972Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T20:42:46.766140Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-26T20:42:46.766176Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-166757","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [eac939c08bc98665f4bf51748fc29d22412f9ee4271d7560afcbe9d5813486ae] <==
	{"level":"warn","ts":"2025-11-26T20:43:00.979727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.006110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.043979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.066035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.103407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.124080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.155362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.189427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.232399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.287745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.313552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.378925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.422100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.465572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.512121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.542004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.569049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.596850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.622164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.672596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.701945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.748810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.817065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.846156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:02.016051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56016","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:43:23 up  1:25,  0 user,  load average: 1.40, 1.96, 1.84
	Linux pause-166757 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d3ad91d7746bb4b386071782c6f36969bb925be7fbcfcd4d33a447d23efb7975] <==
	I1126 20:42:45.724437       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:42:45.724672       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:42:45.724843       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:42:45.724960       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:42:45.725001       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:42:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:42:45.869574       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:42:45.869656       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:42:45.869695       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:42:45.870458       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:42:45.926232       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:42:45.926435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 20:42:45.926576       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:42:45.926740       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	
	
	==> kindnet [ff0a5f1227925b4bdb72055f1ac096149718cb675cab7d6d694aa06631f5ccea] <==
	I1126 20:42:58.044849       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:42:58.057393       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:42:58.057515       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:42:58.057528       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:42:58.057543       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:42:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:42:58.273360       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:42:58.282050       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:42:58.282127       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:42:58.282268       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:43:03.083017       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:43:03.083144       1 metrics.go:72] Registering metrics
	I1126 20:43:03.083251       1 controller.go:711] "Syncing nftables rules"
	I1126 20:43:08.275614       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:43:08.275681       1 main.go:301] handling current node
	I1126 20:43:18.272593       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:43:18.272628       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2db020b8c32b522251976eced59d8bb3bac5adab09d141a0bf566661e506974c] <==
	I1126 20:42:45.918878       1 options.go:263] external host was not specified, using 192.168.85.2
	I1126 20:42:45.928796       1 server.go:150] Version: v1.34.1
	I1126 20:42:45.928925       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [8280393973d719432323cdf237acb2bda01b8dce41b8dffb5bd87ebc5d1dd828] <==
	I1126 20:43:03.001583       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1126 20:43:03.001703       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1126 20:43:03.001736       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1126 20:43:03.001793       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:43:03.010179       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:43:03.010335       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:43:03.019884       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:43:03.020198       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:43:03.020348       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1126 20:43:03.023833       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:43:03.024461       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1126 20:43:03.024674       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:43:03.024740       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:43:03.024773       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:43:03.024808       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:43:03.028715       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1126 20:43:03.028759       1 policy_source.go:240] refreshing policies
	I1126 20:43:03.031298       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1126 20:43:03.039746       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:43:03.711497       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:43:04.896811       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:43:06.386693       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:43:06.484922       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:43:06.536539       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:43:06.636734       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [4f7996a732bd73b5f908a785886db88ef6214a2067d6c11b1d4e1292f31b6556] <==
	I1126 20:43:06.241611       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 20:43:06.241702       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:43:06.242989       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:43:06.243076       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:43:06.243147       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-166757"
	I1126 20:43:06.243191       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1126 20:43:06.243295       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:43:06.245293       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:43:06.249164       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1126 20:43:06.250777       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:43:06.271114       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1126 20:43:06.273412       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:43:06.278329       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 20:43:06.278335       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:43:06.278355       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:43:06.278367       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:43:06.280534       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:43:06.281842       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 20:43:06.281914       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:43:06.283689       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:43:06.285784       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:43:06.288190       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:43:06.294509       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:43:06.294535       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:43:06.294544       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [a84e4d20f1907030703fc54a2a88bc2779dec332e6e8415d049b55a34abd0119] <==
	I1126 20:42:46.690158       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-proxy [4dee54f7f5168459562bdac0a84ab912b1e6d20efea644ea468f645384533723] <==
	I1126 20:42:46.515843       1 server_linux.go:53] "Using iptables proxy"
	
	
	==> kube-proxy [bf90263bd4f1cf3ae79640f3420e3512ddac538a4089f3d2dd281242570b18dc] <==
	I1126 20:42:58.651226       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:42:59.651289       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:43:03.051926       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:43:03.052029       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1126 20:43:03.052153       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:43:03.081316       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:43:03.081452       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:43:03.093544       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:43:03.094034       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:43:03.094103       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:43:03.103366       1 config.go:200] "Starting service config controller"
	I1126 20:43:03.103439       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:43:03.103483       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:43:03.103509       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:43:03.103547       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:43:03.103573       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:43:03.113766       1 config.go:309] "Starting node config controller"
	I1126 20:43:03.113851       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:43:03.114944       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:43:03.205532       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:43:03.205538       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:43:03.205567       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [091ca865eebb280db3b387e326ef44d9b1d136413786c299225e04fa0f4673c1] <==
	I1126 20:43:00.001057       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:43:02.874425       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:43:02.874468       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:43:02.874478       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:43:02.874486       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:43:02.995843       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:43:02.995962       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:43:03.003411       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:43:03.003766       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:43:03.003832       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:43:03.003893       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:43:03.104847       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [0db000c6d2320c82ec9be70d6c38cf52db881b458ac9fcbb65a9de481d9005fd] <==
	
	
	==> kubelet <==
	Nov 26 20:42:57 pause-166757 kubelet[1316]: E1126 20:42:57.706419    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-166757\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a8a9a2580b16520cc16b60787efc26f3" pod="kube-system/etcd-pause-166757"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: E1126 20:42:57.706724    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-166757\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5d0e9f4903b23930a563c698eb6239b4" pod="kube-system/kube-controller-manager-pause-166757"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: E1126 20:42:57.707020    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlg46\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0c1d444f-b32a-44c7-a1eb-ed3e962ba28f" pod="kube-system/kube-proxy-tlg46"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: E1126 20:42:57.707386    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-bdwwv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f354cff5-9bb8-4013-9902-e4e72447beca" pod="kube-system/kindnet-bdwwv"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: I1126 20:42:57.722249    1316 scope.go:117] "RemoveContainer" containerID="5358710efec2a46ce31c272e0d7f8949694cd7300a389f2e5ef3016fa8458d3b"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: I1126 20:42:57.799591    1316 scope.go:117] "RemoveContainer" containerID="db11bad774b4a4bfedcd139e4ff4e88d55fb014c71e7cc7cc2dd585051987b3a"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: I1126 20:42:57.899965    1316 scope.go:117] "RemoveContainer" containerID="97381f7b321c19f78df8e35bcd215fb879395945793d05255aa19eedfec476e0"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: I1126 20:42:57.932521    1316 scope.go:117] "RemoveContainer" containerID="c11d4d76b5030322394f2928ebbca2cdde33bb90f61362d7dee70fa18b14711d"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: I1126 20:42:57.976061    1316 scope.go:117] "RemoveContainer" containerID="6f73d60362531c85177302c22f2f1558a8f9f96309baa3cca8ee2a994661c583"
	Nov 26 20:42:58 pause-166757 kubelet[1316]: I1126 20:42:58.726904    1316 scope.go:117] "RemoveContainer" containerID="60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	Nov 26 20:42:58 pause-166757 kubelet[1316]: E1126 20:42:58.727501    1316 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-f8dk5_kube-system(1e650291-05a3-45a5-9886-938e718690d8)\"" pod="kube-system/coredns-66bc5c9577-f8dk5" podUID="1e650291-05a3-45a5-9886-938e718690d8"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.766613    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-166757\" is forbidden: User \"system:node:pause-166757\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" podUID="4822e8c8ac682bfa93918aca1b60b9ce" pod="kube-system/kube-scheduler-pause-166757"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.768088    1316 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-166757\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.768227    1316 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-166757\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.768300    1316 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-166757\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.820868    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-166757\" is forbidden: User \"system:node:pause-166757\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" podUID="bc338ec41690a6900749846a15a3aec1" pod="kube-system/kube-apiserver-pause-166757"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.868528    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-166757\" is forbidden: User \"system:node:pause-166757\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" podUID="a8a9a2580b16520cc16b60787efc26f3" pod="kube-system/etcd-pause-166757"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.938431    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-166757\" is forbidden: User \"system:node:pause-166757\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" podUID="5d0e9f4903b23930a563c698eb6239b4" pod="kube-system/kube-controller-manager-pause-166757"
	Nov 26 20:43:05 pause-166757 kubelet[1316]: I1126 20:43:05.473605    1316 scope.go:117] "RemoveContainer" containerID="60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	Nov 26 20:43:05 pause-166757 kubelet[1316]: E1126 20:43:05.474264    1316 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-f8dk5_kube-system(1e650291-05a3-45a5-9886-938e718690d8)\"" pod="kube-system/coredns-66bc5c9577-f8dk5" podUID="1e650291-05a3-45a5-9886-938e718690d8"
	Nov 26 20:43:07 pause-166757 kubelet[1316]: W1126 20:43:07.271555    1316 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 26 20:43:17 pause-166757 kubelet[1316]: I1126 20:43:17.005123    1316 scope.go:117] "RemoveContainer" containerID="60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	Nov 26 20:43:20 pause-166757 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:43:20 pause-166757 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:43:20 pause-166757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-166757 -n pause-166757
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-166757 -n pause-166757: exit status 2 (368.990697ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-166757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-166757
helpers_test.go:243: (dbg) docker inspect pause-166757:

-- stdout --
	[
	    {
	        "Id": "4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915",
	        "Created": "2025-11-26T20:41:18.568444955Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 180793,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:41:18.657791911Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915/hostname",
	        "HostsPath": "/var/lib/docker/containers/4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915/hosts",
	        "LogPath": "/var/lib/docker/containers/4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915/4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915-json.log",
	        "Name": "/pause-166757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-166757:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-166757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4222ca30230947e86179ae211d1fb7950dd4e9be60108b3156d8d62ab442c915",
	                "LowerDir": "/var/lib/docker/overlay2/f110d1456460e947f0b7dfca99a7906cd4b868a8fa0c5c915d04992a95b693ef-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f110d1456460e947f0b7dfca99a7906cd4b868a8fa0c5c915d04992a95b693ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f110d1456460e947f0b7dfca99a7906cd4b868a8fa0c5c915d04992a95b693ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f110d1456460e947f0b7dfca99a7906cd4b868a8fa0c5c915d04992a95b693ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-166757",
	                "Source": "/var/lib/docker/volumes/pause-166757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-166757",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-166757",
	                "name.minikube.sigs.k8s.io": "pause-166757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79a9b77d69dfcedc057c7c235cc0b8d197aa55e9ae352c7c76dd0ef3e3a863fd",
	            "SandboxKey": "/var/run/docker/netns/79a9b77d69df",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33018"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33019"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33022"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33021"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-166757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:0e:1c:51:3c:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7f17286df33a471f9116019cb9202d2a12695a60509df55724323f546dd77948",
	                    "EndpointID": "6a9d59863d3c25a2b2a5cf2c55a80d64d939666026578e279141e11581cee7f1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-166757",
	                        "4222ca302309"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-166757 -n pause-166757
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-166757 -n pause-166757: exit status 2 (334.706202ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-166757 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-166757 logs -n 25: (1.376931912s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-784576 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:34 UTC │ 26 Nov 25 20:34 UTC │
	│ delete  │ -p NoKubernetes-784576                                                                                                                   │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:34 UTC │ 26 Nov 25 20:34 UTC │
	│ start   │ -p NoKubernetes-784576 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:34 UTC │ 26 Nov 25 20:35 UTC │
	│ start   │ -p missing-upgrade-701119 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-701119    │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:35 UTC │
	│ ssh     │ -p NoKubernetes-784576 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │                     │
	│ stop    │ -p NoKubernetes-784576                                                                                                                   │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:35 UTC │
	│ start   │ -p NoKubernetes-784576 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:35 UTC │
	│ ssh     │ -p NoKubernetes-784576 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │                     │
	│ delete  │ -p NoKubernetes-784576                                                                                                                   │ NoKubernetes-784576       │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:35 UTC │
	│ start   │ -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:36 UTC │
	│ delete  │ -p missing-upgrade-701119                                                                                                                │ missing-upgrade-701119    │ jenkins │ v1.37.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:35 UTC │
	│ start   │ -p stopped-upgrade-569097 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-569097    │ jenkins │ v1.35.0 │ 26 Nov 25 20:35 UTC │ 26 Nov 25 20:36 UTC │
	│ stop    │ -p kubernetes-upgrade-007998                                                                                                             │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:36 UTC │ 26 Nov 25 20:36 UTC │
	│ start   │ -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:36 UTC │ 26 Nov 25 20:38 UTC │
	│ stop    │ stopped-upgrade-569097 stop                                                                                                              │ stopped-upgrade-569097    │ jenkins │ v1.35.0 │ 26 Nov 25 20:36 UTC │ 26 Nov 25 20:36 UTC │
	│ start   │ -p stopped-upgrade-569097 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-569097    │ jenkins │ v1.37.0 │ 26 Nov 25 20:36 UTC │ 26 Nov 25 20:41 UTC │
	│ start   │ -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:38 UTC │                     │
	│ start   │ -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:38 UTC │ 26 Nov 25 20:38 UTC │
	│ delete  │ -p kubernetes-upgrade-007998                                                                                                             │ kubernetes-upgrade-007998 │ jenkins │ v1.37.0 │ 26 Nov 25 20:38 UTC │ 26 Nov 25 20:38 UTC │
	│ start   │ -p running-upgrade-215687 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-215687    │ jenkins │ v1.35.0 │ 26 Nov 25 20:38 UTC │ 26 Nov 25 20:39 UTC │
	│ start   │ -p running-upgrade-215687 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-215687    │ jenkins │ v1.37.0 │ 26 Nov 25 20:39 UTC │                     │
	│ delete  │ -p stopped-upgrade-569097                                                                                                                │ stopped-upgrade-569097    │ jenkins │ v1.37.0 │ 26 Nov 25 20:41 UTC │ 26 Nov 25 20:41 UTC │
	│ start   │ -p pause-166757 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-166757              │ jenkins │ v1.37.0 │ 26 Nov 25 20:41 UTC │ 26 Nov 25 20:42 UTC │
	│ start   │ -p pause-166757 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-166757              │ jenkins │ v1.37.0 │ 26 Nov 25 20:42 UTC │ 26 Nov 25 20:43 UTC │
	│ pause   │ -p pause-166757 --alsologtostderr -v=5                                                                                                   │ pause-166757              │ jenkins │ v1.37.0 │ 26 Nov 25 20:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:42:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:42:38.014542  184902 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:42:38.014702  184902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:42:38.014712  184902 out.go:374] Setting ErrFile to fd 2...
	I1126 20:42:38.014718  184902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:42:38.015014  184902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:42:38.015497  184902 out.go:368] Setting JSON to false
	I1126 20:42:38.016618  184902 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5088,"bootTime":1764184670,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:42:38.016709  184902 start.go:143] virtualization:  
	I1126 20:42:38.022093  184902 out.go:179] * [pause-166757] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:42:38.025567  184902 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:42:38.025703  184902 notify.go:221] Checking for updates...
	I1126 20:42:38.037245  184902 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:42:38.040789  184902 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:42:38.043854  184902 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:42:38.046919  184902 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:42:38.049856  184902 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:42:38.053366  184902 config.go:182] Loaded profile config "pause-166757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:42:38.054113  184902 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:42:38.092870  184902 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:42:38.093004  184902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:42:38.170380  184902 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:42:38.160587652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:42:38.170490  184902 docker.go:319] overlay module found
	I1126 20:42:38.173611  184902 out.go:179] * Using the docker driver based on existing profile
	I1126 20:42:38.176629  184902 start.go:309] selected driver: docker
	I1126 20:42:38.176647  184902 start.go:927] validating driver "docker" against &{Name:pause-166757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-166757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:42:38.176817  184902 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:42:38.176924  184902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:42:38.231003  184902 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:42:38.22115119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:42:38.231409  184902 cni.go:84] Creating CNI manager for ""
	I1126 20:42:38.231477  184902 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:42:38.231522  184902 start.go:353] cluster config:
	{Name:pause-166757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-166757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:42:38.236496  184902 out.go:179] * Starting "pause-166757" primary control-plane node in "pause-166757" cluster
	I1126 20:42:38.239424  184902 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:42:38.242295  184902 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:42:38.245135  184902 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:42:38.245150  184902 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:42:38.245187  184902 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:42:38.245196  184902 cache.go:65] Caching tarball of preloaded images
	I1126 20:42:38.245262  184902 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:42:38.245271  184902 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:42:38.245408  184902 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/config.json ...
	I1126 20:42:38.267624  184902 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:42:38.267649  184902 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:42:38.267667  184902 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:42:38.267696  184902 start.go:360] acquireMachinesLock for pause-166757: {Name:mk5f9cf6d34bb8aea4563d0f7759f0f2253ef309 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:42:38.267763  184902 start.go:364] duration metric: took 41.918µs to acquireMachinesLock for "pause-166757"
	I1126 20:42:38.267789  184902 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:42:38.267797  184902 fix.go:54] fixHost starting: 
	I1126 20:42:38.268052  184902 cli_runner.go:164] Run: docker container inspect pause-166757 --format={{.State.Status}}
	I1126 20:42:38.284471  184902 fix.go:112] recreateIfNeeded on pause-166757: state=Running err=<nil>
	W1126 20:42:38.284505  184902 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:42:39.940371  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:39.940840  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:39.940884  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:39.940939  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:39.976125  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:39.976149  174302 cri.go:89] found id: ""
	I1126 20:42:39.976157  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:39.976212  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:39.979603  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:39.979673  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:40.043227  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:40.043247  174302 cri.go:89] found id: ""
	I1126 20:42:40.043255  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:40.043327  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:40.059506  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:40.059679  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:40.124195  174302 cri.go:89] found id: ""
	I1126 20:42:40.124219  174302 logs.go:282] 0 containers: []
	W1126 20:42:40.124228  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:40.124235  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:40.124300  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:40.167237  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:40.167315  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:40.167389  174302 cri.go:89] found id: ""
	I1126 20:42:40.167416  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:40.167510  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:40.171741  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:40.175767  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:40.175869  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:40.216219  174302 cri.go:89] found id: ""
	I1126 20:42:40.216254  174302 logs.go:282] 0 containers: []
	W1126 20:42:40.216263  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:40.216270  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:40.216354  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:40.254543  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:40.254570  174302 cri.go:89] found id: ""
	I1126 20:42:40.254578  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:40.254635  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:40.258921  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:40.259023  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:40.295677  174302 cri.go:89] found id: ""
	I1126 20:42:40.295701  174302 logs.go:282] 0 containers: []
	W1126 20:42:40.295712  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:40.295719  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:40.295781  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:40.333228  174302 cri.go:89] found id: ""
	I1126 20:42:40.333253  174302 logs.go:282] 0 containers: []
	W1126 20:42:40.333261  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:40.333276  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:40.333288  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:40.349299  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:40.349330  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:40.421611  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:40.421632  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:40.421645  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:40.471550  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:40.471581  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:40.561329  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:40.561362  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:40.597842  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:40.597871  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:40.673232  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:40.673265  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:40.794627  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:40.794661  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:40.838930  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:40.838961  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:40.876023  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:40.876089  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:42:38.287820  184902 out.go:252] * Updating the running docker "pause-166757" container ...
	I1126 20:42:38.287854  184902 machine.go:94] provisionDockerMachine start ...
	I1126 20:42:38.287939  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:38.304915  184902 main.go:143] libmachine: Using SSH client type: native
	I1126 20:42:38.305302  184902 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1126 20:42:38.305319  184902 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:42:38.453616  184902 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-166757
	
	I1126 20:42:38.453642  184902 ubuntu.go:182] provisioning hostname "pause-166757"
	I1126 20:42:38.453701  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:38.472187  184902 main.go:143] libmachine: Using SSH client type: native
	I1126 20:42:38.472504  184902 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1126 20:42:38.472523  184902 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-166757 && echo "pause-166757" | sudo tee /etc/hostname
	I1126 20:42:38.635978  184902 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-166757
	
	I1126 20:42:38.636066  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:38.654894  184902 main.go:143] libmachine: Using SSH client type: native
	I1126 20:42:38.655209  184902 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1126 20:42:38.655238  184902 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-166757' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-166757/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-166757' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:42:38.802201  184902 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:42:38.802251  184902 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:42:38.802282  184902 ubuntu.go:190] setting up certificates
	I1126 20:42:38.802300  184902 provision.go:84] configureAuth start
	I1126 20:42:38.802362  184902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-166757
	I1126 20:42:38.820026  184902 provision.go:143] copyHostCerts
	I1126 20:42:38.820099  184902 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:42:38.820113  184902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:42:38.820188  184902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:42:38.820302  184902 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:42:38.820313  184902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:42:38.820340  184902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:42:38.820446  184902 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:42:38.820457  184902 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:42:38.820486  184902 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:42:38.820551  184902 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.pause-166757 san=[127.0.0.1 192.168.85.2 localhost minikube pause-166757]
	I1126 20:42:38.928735  184902 provision.go:177] copyRemoteCerts
	I1126 20:42:38.928799  184902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:42:38.928841  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:38.946198  184902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/pause-166757/id_rsa Username:docker}
	I1126 20:42:39.049794  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:42:39.068338  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1126 20:42:39.085395  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:42:39.102978  184902 provision.go:87] duration metric: took 300.650312ms to configureAuth
	I1126 20:42:39.103003  184902 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:42:39.103229  184902 config.go:182] Loaded profile config "pause-166757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:42:39.103344  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:39.121407  184902 main.go:143] libmachine: Using SSH client type: native
	I1126 20:42:39.121728  184902 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33018 <nil> <nil>}
	I1126 20:42:39.121745  184902 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:42:43.425698  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:43.426210  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:43.426268  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:43.426329  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:43.463525  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:43.463549  174302 cri.go:89] found id: ""
	I1126 20:42:43.463557  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:43.463623  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:43.467208  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:43.467309  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:43.506671  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:43.506694  174302 cri.go:89] found id: ""
	I1126 20:42:43.506705  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:43.506799  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:43.510413  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:43.510487  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:43.547424  174302 cri.go:89] found id: ""
	I1126 20:42:43.547497  174302 logs.go:282] 0 containers: []
	W1126 20:42:43.547521  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:43.547540  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:43.547628  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:43.587975  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:43.587999  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:43.588004  174302 cri.go:89] found id: ""
	I1126 20:42:43.588011  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:43.588068  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:43.591892  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:43.595580  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:43.595676  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:43.637077  174302 cri.go:89] found id: ""
	I1126 20:42:43.637103  174302 logs.go:282] 0 containers: []
	W1126 20:42:43.637112  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:43.637118  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:43.637175  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:43.675581  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:43.675604  174302 cri.go:89] found id: ""
	I1126 20:42:43.675612  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:43.675691  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:43.679639  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:43.679728  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:43.714936  174302 cri.go:89] found id: ""
	I1126 20:42:43.715003  174302 logs.go:282] 0 containers: []
	W1126 20:42:43.715017  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:43.715024  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:43.715092  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:43.752756  174302 cri.go:89] found id: ""
	I1126 20:42:43.752782  174302 logs.go:282] 0 containers: []
	W1126 20:42:43.752791  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:43.752807  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:43.752819  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:43.874035  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:43.874073  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:43.941525  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:43.941592  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:43.941629  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:43.984068  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:43.984100  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:44.032026  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:44.032058  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:44.069961  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:44.069990  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:44.142413  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:44.142447  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:44.157944  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:44.157970  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:44.246288  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:44.246323  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:44.285867  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:44.285896  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:42:46.853992  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:46.854434  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:46.854484  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:46.854543  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:46.891485  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:46.891507  174302 cri.go:89] found id: ""
	I1126 20:42:46.891515  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:46.891569  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:46.894997  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:46.895074  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:46.934842  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:46.934866  174302 cri.go:89] found id: ""
	I1126 20:42:46.934874  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:46.934929  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:46.938487  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:46.938564  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:46.995352  174302 cri.go:89] found id: ""
	I1126 20:42:46.995377  174302 logs.go:282] 0 containers: []
	W1126 20:42:46.995385  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:46.995392  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:46.995449  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:47.033514  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:47.033536  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:47.033541  174302 cri.go:89] found id: ""
	I1126 20:42:47.033549  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:47.033605  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:47.037996  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:47.041289  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:47.041367  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:47.080358  174302 cri.go:89] found id: ""
	I1126 20:42:47.080382  174302 logs.go:282] 0 containers: []
	W1126 20:42:47.080392  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:47.080398  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:47.080453  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:47.118826  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:47.118847  174302 cri.go:89] found id: ""
	I1126 20:42:47.118855  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:47.118908  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:47.122707  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:47.122778  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:47.167756  174302 cri.go:89] found id: ""
	I1126 20:42:47.167779  174302 logs.go:282] 0 containers: []
	W1126 20:42:47.167787  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:47.167794  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:47.167850  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:47.203349  174302 cri.go:89] found id: ""
	I1126 20:42:47.203371  174302 logs.go:282] 0 containers: []
	W1126 20:42:47.203379  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:47.203393  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:47.203412  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:47.334693  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:47.334725  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:47.350516  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:47.350541  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:47.397393  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:47.397420  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:47.440372  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:47.440400  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:47.476444  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:47.476470  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:47.511538  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:47.511563  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:42:47.554136  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:47.554166  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:47.628857  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:47.628879  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:47.628892  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:47.715814  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:47.715848  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:44.500837  184902 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:42:44.500860  184902 machine.go:97] duration metric: took 6.21299805s to provisionDockerMachine
	I1126 20:42:44.500872  184902 start.go:293] postStartSetup for "pause-166757" (driver="docker")
	I1126 20:42:44.500883  184902 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:42:44.500941  184902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:42:44.501002  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:44.518783  184902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/pause-166757/id_rsa Username:docker}
	I1126 20:42:44.625311  184902 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:42:44.628946  184902 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:42:44.628984  184902 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:42:44.628996  184902 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:42:44.629051  184902 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:42:44.629139  184902 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:42:44.629241  184902 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:42:44.636861  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:42:44.654948  184902 start.go:296] duration metric: took 154.060756ms for postStartSetup
	I1126 20:42:44.655028  184902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:42:44.655070  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:44.672310  184902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/pause-166757/id_rsa Username:docker}
	I1126 20:42:44.775202  184902 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:42:44.780264  184902 fix.go:56] duration metric: took 6.512458303s for fixHost
	I1126 20:42:44.780290  184902 start.go:83] releasing machines lock for "pause-166757", held for 6.512511084s
	I1126 20:42:44.780372  184902 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-166757
	I1126 20:42:44.797099  184902 ssh_runner.go:195] Run: cat /version.json
	I1126 20:42:44.797165  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:44.797413  184902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:42:44.797471  184902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-166757
	I1126 20:42:44.822183  184902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/pause-166757/id_rsa Username:docker}
	I1126 20:42:44.826078  184902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33018 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/pause-166757/id_rsa Username:docker}
	I1126 20:42:45.044740  184902 ssh_runner.go:195] Run: systemctl --version
	I1126 20:42:45.054711  184902 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:42:45.124906  184902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:42:45.132403  184902 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:42:45.132494  184902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:42:45.143102  184902 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:42:45.143137  184902 start.go:496] detecting cgroup driver to use...
	I1126 20:42:45.143176  184902 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:42:45.143249  184902 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:42:45.176799  184902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:42:45.224328  184902 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:42:45.224442  184902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:42:45.261348  184902 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:42:45.322071  184902 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:42:45.576340  184902 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:42:45.875313  184902 docker.go:234] disabling docker service ...
	I1126 20:42:45.875389  184902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:42:45.893977  184902 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:42:45.908275  184902 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:42:46.111439  184902 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:42:46.340644  184902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:42:46.357492  184902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:42:46.375743  184902 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:42:46.375833  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.384594  184902 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:42:46.384667  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.396443  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.408363  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.420304  184902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:42:46.431920  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.453322  184902 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.464678  184902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:42:46.479097  184902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:42:46.487446  184902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:42:46.495542  184902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:42:46.728163  184902 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:42:50.289260  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:50.289713  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:50.289764  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:50.289830  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:50.331413  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:50.331435  174302 cri.go:89] found id: ""
	I1126 20:42:50.331444  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:50.331502  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:50.335102  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:50.335171  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:50.371490  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:50.371518  174302 cri.go:89] found id: ""
	I1126 20:42:50.371526  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:50.371581  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:50.375177  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:50.375297  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:50.413778  174302 cri.go:89] found id: ""
	I1126 20:42:50.413805  174302 logs.go:282] 0 containers: []
	W1126 20:42:50.413815  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:50.413821  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:50.413880  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:50.454411  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:50.454435  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:50.454440  174302 cri.go:89] found id: ""
	I1126 20:42:50.454447  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:50.454510  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:50.458064  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:50.461559  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:50.461651  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:50.503213  174302 cri.go:89] found id: ""
	I1126 20:42:50.503249  174302 logs.go:282] 0 containers: []
	W1126 20:42:50.503259  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:50.503265  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:50.503325  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:50.540076  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:50.540097  174302 cri.go:89] found id: ""
	I1126 20:42:50.540106  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:50.540161  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:50.543698  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:50.543773  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:50.579770  174302 cri.go:89] found id: ""
	I1126 20:42:50.579796  174302 logs.go:282] 0 containers: []
	W1126 20:42:50.579805  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:50.579812  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:50.579868  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:50.619971  174302 cri.go:89] found id: ""
	I1126 20:42:50.620004  174302 logs.go:282] 0 containers: []
	W1126 20:42:50.620014  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:50.620027  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:50.620039  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:50.742264  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:50.742296  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:50.758754  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:50.758784  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:50.839315  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:50.839376  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:50.839396  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:50.885299  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:50.885330  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:50.960239  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:50.960272  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:42:50.999930  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:50.999957  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:51.043310  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:51.043338  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:51.137429  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:51.137470  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:51.177553  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:51.177585  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:56.029706  184902 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.301506325s)
	I1126 20:42:56.029733  184902 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:42:56.029786  184902 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:42:56.034148  184902 start.go:564] Will wait 60s for crictl version
	I1126 20:42:56.034225  184902 ssh_runner.go:195] Run: which crictl
	I1126 20:42:56.038226  184902 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:42:56.069598  184902 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:42:56.069686  184902 ssh_runner.go:195] Run: crio --version
	I1126 20:42:56.100898  184902 ssh_runner.go:195] Run: crio --version
	I1126 20:42:56.143165  184902 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:42:56.146269  184902 cli_runner.go:164] Run: docker network inspect pause-166757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:42:56.162713  184902 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:42:56.166643  184902 kubeadm.go:884] updating cluster {Name:pause-166757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-166757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:42:56.166788  184902 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:42:56.166853  184902 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:42:56.203751  184902 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:42:56.203779  184902 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:42:56.203837  184902 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:42:56.228422  184902 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:42:56.228449  184902 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:42:56.228456  184902 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1126 20:42:56.228558  184902 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-166757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-166757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:42:56.228640  184902 ssh_runner.go:195] Run: crio config
	I1126 20:42:56.287060  184902 cni.go:84] Creating CNI manager for ""
	I1126 20:42:56.287081  184902 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:42:56.287105  184902 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:42:56.287132  184902 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-166757 NodeName:pause-166757 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:42:56.287269  184902 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-166757"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
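	The rendered config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, separated by `---`). As an aside, its shape can be sanity-checked before it is copied to the node as `kubeadm.yaml.new`; the sketch below is illustrative only (the local file path is a placeholder, not part of minikube's code):

```shell
# Illustrative only: check the multi-document kubeadm config has the
# expected shape (one "kind:" per document, separators between them).
f=${1:-kubeadm.yaml}            # hypothetical local copy of the rendered config
seps=$(grep -c '^---$' "$f")    # document separators (expected: 3)
kinds=$(grep -c '^kind:' "$f")  # one kind per document (expected: 4)
echo "separators=$seps kinds=$kinds"
[ "$kinds" -eq $((seps + 1)) ] || echo "warning: separator/kind mismatch"
```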
	I1126 20:42:56.287341  184902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:42:56.295131  184902 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:42:56.295208  184902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:42:56.302528  184902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1126 20:42:56.315798  184902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:42:56.328809  184902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1126 20:42:56.341587  184902 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:42:56.345243  184902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:42:56.496566  184902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:42:56.509656  184902 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757 for IP: 192.168.85.2
	I1126 20:42:56.509679  184902 certs.go:195] generating shared ca certs ...
	I1126 20:42:56.509694  184902 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:42:56.509860  184902 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:42:56.509969  184902 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:42:56.509990  184902 certs.go:257] generating profile certs ...
	I1126 20:42:56.510099  184902 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/client.key
	I1126 20:42:56.510169  184902 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/apiserver.key.edbe23e7
	I1126 20:42:56.510214  184902 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/proxy-client.key
	I1126 20:42:56.510325  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:42:56.510373  184902 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:42:56.510387  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:42:56.510416  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:42:56.510457  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:42:56.510488  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:42:56.510543  184902 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:42:56.511296  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:42:56.531516  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:42:56.548938  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:42:56.567492  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:42:56.584319  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1126 20:42:56.601943  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:42:56.628933  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:42:56.647930  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:42:56.666458  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:42:56.684916  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:42:56.702718  184902 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:42:56.721215  184902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:42:56.734393  184902 ssh_runner.go:195] Run: openssl version
	I1126 20:42:56.740640  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:42:56.749059  184902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:42:56.752658  184902 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:42:56.752748  184902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:42:56.794723  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:42:56.802922  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:42:56.811190  184902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:42:56.814824  184902 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:42:56.814928  184902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:42:56.855831  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:42:56.863628  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:42:56.871733  184902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:42:56.875319  184902 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:42:56.875435  184902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:42:56.916262  184902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:42:56.924249  184902 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:42:56.928508  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:42:56.971627  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:42:57.015286  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:42:57.057508  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:42:57.098601  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:42:57.139597  184902 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
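	The `openssl x509 -checkend 86400` runs above exit 0 only if the certificate will still be valid 24 hours (86400s) from now, and the earlier `x509 -hash` / `ln -fs .../<hash>.0` pairs build the subject-hash symlinks OpenSSL uses to resolve trust anchors under /etc/ssl/certs. Both can be reproduced standalone against a throwaway self-signed cert (temporary paths only, none of the CI paths above):

```shell
# Standalone demo with a temporary self-signed cert (no minikube paths involved).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$tmp/demo.key" -out "$tmp/demo.pem" -days 2 2>/dev/null

# Exit status 0 means the cert is still valid 86400s (24h) from now.
openssl x509 -noout -in "$tmp/demo.pem" -checkend 86400 && echo "valid for 24h"

# OpenSSL resolves trust anchors via <subject-hash>.0 symlinks, as in the log.
hash=$(openssl x509 -hash -noout -in "$tmp/demo.pem")
ln -fs "$tmp/demo.pem" "$tmp/$hash.0"
readlink "$tmp/$hash.0"
rm -rf "$tmp"
```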
	I1126 20:42:57.181480  184902 kubeadm.go:401] StartCluster: {Name:pause-166757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-166757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:42:57.181637  184902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:42:57.181704  184902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:42:57.225890  184902 cri.go:89] found id: "2db020b8c32b522251976eced59d8bb3bac5adab09d141a0bf566661e506974c"
	I1126 20:42:57.225909  184902 cri.go:89] found id: "0db000c6d2320c82ec9be70d6c38cf52db881b458ac9fcbb65a9de481d9005fd"
	I1126 20:42:57.225913  184902 cri.go:89] found id: "60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	I1126 20:42:57.225916  184902 cri.go:89] found id: "d3ad91d7746bb4b386071782c6f36969bb925be7fbcfcd4d33a447d23efb7975"
	I1126 20:42:57.225944  184902 cri.go:89] found id: "4dee54f7f5168459562bdac0a84ab912b1e6d20efea644ea468f645384533723"
	I1126 20:42:57.225948  184902 cri.go:89] found id: "a84e4d20f1907030703fc54a2a88bc2779dec332e6e8415d049b55a34abd0119"
	I1126 20:42:57.225951  184902 cri.go:89] found id: "6dffcf8b996742928728e2c585061644cc362bcb92cdff0791c4434cf0f2073a"
	I1126 20:42:57.225954  184902 cri.go:89] found id: "6f73d60362531c85177302c22f2f1558a8f9f96309baa3cca8ee2a994661c583"
	I1126 20:42:57.225957  184902 cri.go:89] found id: "97381f7b321c19f78df8e35bcd215fb879395945793d05255aa19eedfec476e0"
	I1126 20:42:57.225965  184902 cri.go:89] found id: "c11d4d76b5030322394f2928ebbca2cdde33bb90f61362d7dee70fa18b14711d"
	I1126 20:42:57.225969  184902 cri.go:89] found id: "145cb6afa55034a23db4a9ad4ef5f1ae8d82b6d44e24936232513aa2bf8ae758"
	I1126 20:42:57.225972  184902 cri.go:89] found id: "a7d9dc021a8d9179b9b73a643682a1364e4deea5cdd586389fefaa57bd0bf601"
	I1126 20:42:57.225976  184902 cri.go:89] found id: "5358710efec2a46ce31c272e0d7f8949694cd7300a389f2e5ef3016fa8458d3b"
	I1126 20:42:57.225983  184902 cri.go:89] found id: "db11bad774b4a4bfedcd139e4ff4e88d55fb014c71e7cc7cc2dd585051987b3a"
	I1126 20:42:57.225987  184902 cri.go:89] found id: ""
	I1126 20:42:57.226037  184902 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:42:57.240337  184902 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:42:57Z" level=error msg="open /run/runc: no such file or directory"
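	The `runc list` failure above is benign: runc's default state directory `/run/runc` typically exists only once runc has created container state there, so on a node where no containers are runc-managed at that moment the listing errors out instead of returning an empty set (minikube logs the warning and moves on). A defensive wrapper, purely illustrative and not minikube's actual code, would treat the missing state dir as "no containers":

```shell
# Hedged sketch: fall back to an empty JSON list when runc has no state dir.
runc_list() {
  if [ -d /run/runc ]; then
    sudo runc list -f json
  else
    echo "[]"   # missing /run/runc => no runc-managed containers
  fi
}
runc_list
```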
	I1126 20:42:57.240421  184902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:42:57.251325  184902 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:42:57.251350  184902 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:42:57.251402  184902 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:42:57.259349  184902 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:42:57.259966  184902 kubeconfig.go:125] found "pause-166757" server: "https://192.168.85.2:8443"
	I1126 20:42:57.260754  184902 kapi.go:59] client config for pause-166757: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:42:57.261233  184902 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1126 20:42:57.261256  184902 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1126 20:42:57.261268  184902 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1126 20:42:57.261273  184902 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1126 20:42:57.261277  184902 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1126 20:42:57.261538  184902 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:42:57.284656  184902 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1126 20:42:57.284726  184902 kubeadm.go:602] duration metric: took 33.369518ms to restartPrimaryControlPlane
	I1126 20:42:57.284750  184902 kubeadm.go:403] duration metric: took 103.278284ms to StartCluster
	I1126 20:42:57.284801  184902 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:42:57.284910  184902 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:42:57.286078  184902 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:42:57.286378  184902 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:42:57.286791  184902 config.go:182] Loaded profile config "pause-166757": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:42:57.287038  184902 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:42:57.291494  184902 out.go:179] * Verifying Kubernetes components...
	I1126 20:42:57.291589  184902 out.go:179] * Enabled addons: 
	I1126 20:42:53.720285  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:53.720760  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:53.720812  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:53.720877  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:53.757361  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:53.757382  174302 cri.go:89] found id: ""
	I1126 20:42:53.757390  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:53.757454  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:53.760881  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:53.760949  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:53.802367  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:53.802399  174302 cri.go:89] found id: ""
	I1126 20:42:53.802408  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:53.802465  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:53.806128  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:53.806223  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:53.844172  174302 cri.go:89] found id: ""
	I1126 20:42:53.844200  174302 logs.go:282] 0 containers: []
	W1126 20:42:53.844209  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:53.844215  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:53.844275  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:53.886213  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:53.886232  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:53.886238  174302 cri.go:89] found id: ""
	I1126 20:42:53.886245  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:53.886300  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:53.889899  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:53.893465  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:53.893536  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:53.930218  174302 cri.go:89] found id: ""
	I1126 20:42:53.930239  174302 logs.go:282] 0 containers: []
	W1126 20:42:53.930247  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:53.930254  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:53.930310  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:53.966731  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:53.966753  174302 cri.go:89] found id: ""
	I1126 20:42:53.966761  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:53.966822  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:53.970291  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:53.970362  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:54.013498  174302 cri.go:89] found id: ""
	I1126 20:42:54.013525  174302 logs.go:282] 0 containers: []
	W1126 20:42:54.013535  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:54.013544  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:54.014189  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:54.052950  174302 cri.go:89] found id: ""
	I1126 20:42:54.052977  174302 logs.go:282] 0 containers: []
	W1126 20:42:54.052986  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:54.052999  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:54.053011  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:54.096941  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:54.096969  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:54.145283  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:54.145315  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:54.181898  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:54.181949  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:42:54.223107  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:54.223137  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:54.358343  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:54.358389  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:54.375057  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:54.375094  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:54.449955  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:54.450020  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:54.450041  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:54.555317  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:54.555352  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:54.592201  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:54.592229  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:57.166230  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:42:57.166658  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:42:57.166701  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:42:57.166755  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:42:57.212890  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:57.212913  174302 cri.go:89] found id: ""
	I1126 20:42:57.212922  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:42:57.212976  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:57.217238  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:42:57.217318  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:42:57.280838  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:57.280861  174302 cri.go:89] found id: ""
	I1126 20:42:57.280868  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:42:57.280921  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:57.285007  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:42:57.285069  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:42:57.328290  174302 cri.go:89] found id: ""
	I1126 20:42:57.328314  174302 logs.go:282] 0 containers: []
	W1126 20:42:57.328323  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:42:57.328329  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:42:57.328388  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:42:57.389340  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:57.389362  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:57.389367  174302 cri.go:89] found id: ""
	I1126 20:42:57.389373  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:42:57.389429  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:57.394208  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:57.397814  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:42:57.397884  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:42:57.452645  174302 cri.go:89] found id: ""
	I1126 20:42:57.452670  174302 logs.go:282] 0 containers: []
	W1126 20:42:57.452679  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:42:57.452685  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:42:57.452742  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:42:57.507191  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:57.507214  174302 cri.go:89] found id: ""
	I1126 20:42:57.507222  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:42:57.507286  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:42:57.513611  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:42:57.513695  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:42:57.564709  174302 cri.go:89] found id: ""
	I1126 20:42:57.564731  174302 logs.go:282] 0 containers: []
	W1126 20:42:57.564739  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:42:57.564747  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:42:57.564803  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:42:57.628386  174302 cri.go:89] found id: ""
	I1126 20:42:57.628410  174302 logs.go:282] 0 containers: []
	W1126 20:42:57.628419  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:42:57.628433  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:42:57.628444  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:42:57.805082  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:42:57.805116  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:42:57.295164  184902 addons.go:530] duration metric: took 8.106743ms for enable addons: enabled=[]
	I1126 20:42:57.295301  184902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:42:57.479092  184902 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:42:57.497286  184902 node_ready.go:35] waiting up to 6m0s for node "pause-166757" to be "Ready" ...
	I1126 20:42:57.833447  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:42:57.833475  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:42:57.980055  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:42:57.980074  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:42:57.980087  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:42:58.044843  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:42:58.044874  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:42:58.137541  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:42:58.137572  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:42:58.209732  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:42:58.209764  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:42:58.280534  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:42:58.280562  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:42:58.379553  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:42:58.379591  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:42:58.520646  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:42:58.520682  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
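The "container status" command above uses a shell fallback chain: resolve `crictl` with `which`, fall back to the bare name if resolution fails, and fall back to `docker ps -a` if the whole first command fails. A minimal, self-contained sketch of that idiom (the command name here is deliberately fake so the fallback path is exercised):

```shell
# Illustrative sketch of the fallback pattern in
#   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
# `which` prints the resolved path and exits 0 if the binary exists;
# otherwise it prints nothing and exits nonzero, so `echo` supplies
# the bare name instead. The tool name below is intentionally bogus.
tool=$(which definitely-not-installed-xyz || echo definitely-not-installed-xyz)
echo "resolved: $tool"

# Running the unresolved name fails, so the `||` alternative runs,
# mirroring the `|| sudo docker ps -a` branch in the log.
"$tool" --version 2>/dev/null || echo "falling back to alternative"
```

This is why the log shows repeated `Run: which crictl` lines before each `crictl logs` invocation: minikube re-resolves the binary each time rather than caching the path.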
	I1126 20:43:01.107107  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:01.107484  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:01.107532  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:01.107588  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:01.171505  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:01.171528  174302 cri.go:89] found id: ""
	I1126 20:43:01.171537  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:01.171591  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:01.175292  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:01.175355  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:01.236742  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:01.236766  174302 cri.go:89] found id: ""
	I1126 20:43:01.236774  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:01.236833  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:01.240521  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:01.240593  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:01.298912  174302 cri.go:89] found id: ""
	I1126 20:43:01.298937  174302 logs.go:282] 0 containers: []
	W1126 20:43:01.298946  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:01.298951  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:01.299007  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:43:01.377570  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:01.377593  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:01.377603  174302 cri.go:89] found id: ""
	I1126 20:43:01.377610  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:43:01.377667  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:01.381411  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:01.385771  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:43:01.385841  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:43:01.450707  174302 cri.go:89] found id: ""
	I1126 20:43:01.450731  174302 logs.go:282] 0 containers: []
	W1126 20:43:01.450739  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:43:01.450745  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:43:01.450802  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:43:01.529380  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:01.529435  174302 cri.go:89] found id: ""
	I1126 20:43:01.529444  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:43:01.529523  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:01.534324  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:43:01.534404  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:43:01.635643  174302 cri.go:89] found id: ""
	I1126 20:43:01.635668  174302 logs.go:282] 0 containers: []
	W1126 20:43:01.635678  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:43:01.635684  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:43:01.635748  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:43:01.694671  174302 cri.go:89] found id: ""
	I1126 20:43:01.694696  174302 logs.go:282] 0 containers: []
	W1126 20:43:01.694705  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:43:01.694720  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:43:01.694732  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:43:01.820274  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:43:01.820299  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:43:01.820318  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:01.921700  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:43:01.921736  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:02.066016  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:43:02.066051  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:02.138225  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:43:02.138299  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:43:02.219876  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:43:02.219959  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:43:02.358191  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:43:02.358267  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:43:02.375546  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:43:02.375617  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:02.424582  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:43:02.424754  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:02.477787  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:43:02.477855  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:02.901097  184902 node_ready.go:49] node "pause-166757" is "Ready"
	I1126 20:43:02.901125  184902 node_ready.go:38] duration metric: took 5.403810504s for node "pause-166757" to be "Ready" ...
	I1126 20:43:02.901139  184902 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:43:02.901194  184902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:43:02.919440  184902 api_server.go:72] duration metric: took 5.633001938s to wait for apiserver process to appear ...
	I1126 20:43:02.919463  184902 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:43:02.919481  184902 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:43:02.982654  184902 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:43:02.982684  184902 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
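The 500 responses above use the apiserver's verbose healthz format: one line per check, `[+]` for passing and `[-]` for failing, with a trailing `healthz check failed` summary. A small self-contained sketch of parsing that format to extract the failing checks (the sample response is a shortened, hypothetical excerpt, not the full dump above):

```shell
# Illustrative only: list the failing checks from a verbose
# /healthz response. "[-]" marks a check that has not passed.
# The response text here is a fabricated two-failure sample.
response='[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
healthz check failed'

# Count failing checks and print their names.
failing=$(printf '%s\n' "$response" | grep -c '^\[-\]')
printf '%s\n' "$response" | sed -n 's/^\[-\]\([^ ]*\).*/\1/p'
echo "failing checks: $failing"
```

In the log, the set of `[-]` lines shrinks across successive polls (crd-informer-synced and the service-IP repair hooks clear first, then the bootstrap-roles hook) until the endpoint returns 200, which is the normal startup progression of the post-start hooks.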
	I1126 20:43:05.082394  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:05.082873  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:05.082924  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:05.082994  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:05.119473  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:05.119495  174302 cri.go:89] found id: ""
	I1126 20:43:05.119503  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:05.119559  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:05.123196  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:05.123272  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:05.162505  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:05.162523  174302 cri.go:89] found id: ""
	I1126 20:43:05.162530  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:05.162589  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:05.166115  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:05.166188  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:05.203289  174302 cri.go:89] found id: ""
	I1126 20:43:05.203313  174302 logs.go:282] 0 containers: []
	W1126 20:43:05.203322  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:05.203330  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:05.203386  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:43:05.244301  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:05.244323  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:05.244328  174302 cri.go:89] found id: ""
	I1126 20:43:05.244335  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:43:05.244390  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:05.248408  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:05.251623  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:43:05.251688  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:43:05.290984  174302 cri.go:89] found id: ""
	I1126 20:43:05.291004  174302 logs.go:282] 0 containers: []
	W1126 20:43:05.291012  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:43:05.291019  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:43:05.291083  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:43:05.336638  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:05.336707  174302 cri.go:89] found id: ""
	I1126 20:43:05.336730  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:43:05.336806  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:05.340536  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:43:05.340605  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:43:05.383695  174302 cri.go:89] found id: ""
	I1126 20:43:05.383771  174302 logs.go:282] 0 containers: []
	W1126 20:43:05.383788  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:43:05.383795  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:43:05.383854  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:43:05.422989  174302 cri.go:89] found id: ""
	I1126 20:43:05.423011  174302 logs.go:282] 0 containers: []
	W1126 20:43:05.423020  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:43:05.423033  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:43:05.423046  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:05.467648  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:43:05.467677  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:05.560258  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:43:05.560299  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:05.615215  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:43:05.615291  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:43:05.633340  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:43:05.633368  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:05.691795  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:43:05.691825  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:05.739736  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:43:05.739766  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:05.775317  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:43:05.775343  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:43:05.846454  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:43:05.846485  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:43:05.974670  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:43:05.974707  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:43:06.050421  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:43:03.420003  184902 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:43:03.429479  184902 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:43:03.429506  184902 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:43:03.920165  184902 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:43:03.928468  184902 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:43:03.928539  184902 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:43:04.420322  184902 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:43:04.428545  184902 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1126 20:43:04.429663  184902 api_server.go:141] control plane version: v1.34.1
	I1126 20:43:04.429721  184902 api_server.go:131] duration metric: took 1.510250692s to wait for apiserver health ...
	I1126 20:43:04.429737  184902 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:43:04.433108  184902 system_pods.go:59] 7 kube-system pods found
	I1126 20:43:04.433151  184902 system_pods.go:61] "coredns-66bc5c9577-f8dk5" [1e650291-05a3-45a5-9886-938e718690d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:43:04.433160  184902 system_pods.go:61] "etcd-pause-166757" [1d89bc54-cd9f-4b6d-a8dd-859d96a1c436] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:43:04.433172  184902 system_pods.go:61] "kindnet-bdwwv" [f354cff5-9bb8-4013-9902-e4e72447beca] Running
	I1126 20:43:04.433178  184902 system_pods.go:61] "kube-apiserver-pause-166757" [e5703723-3967-4fb2-a8fd-83cdf9aeef3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:43:04.433183  184902 system_pods.go:61] "kube-controller-manager-pause-166757" [59567e7b-f221-4488-98dd-02435a3fd7e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:43:04.433190  184902 system_pods.go:61] "kube-proxy-tlg46" [0c1d444f-b32a-44c7-a1eb-ed3e962ba28f] Running
	I1126 20:43:04.433195  184902 system_pods.go:61] "kube-scheduler-pause-166757" [8f0ec421-1e52-447d-8235-08a1f90674a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:43:04.433205  184902 system_pods.go:74] duration metric: took 3.462839ms to wait for pod list to return data ...
	I1126 20:43:04.433217  184902 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:43:04.435916  184902 default_sa.go:45] found service account: "default"
	I1126 20:43:04.435938  184902 default_sa.go:55] duration metric: took 2.716087ms for default service account to be created ...
	I1126 20:43:04.435948  184902 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:43:04.438679  184902 system_pods.go:86] 7 kube-system pods found
	I1126 20:43:04.438716  184902 system_pods.go:89] "coredns-66bc5c9577-f8dk5" [1e650291-05a3-45a5-9886-938e718690d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:43:04.438757  184902 system_pods.go:89] "etcd-pause-166757" [1d89bc54-cd9f-4b6d-a8dd-859d96a1c436] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:43:04.438765  184902 system_pods.go:89] "kindnet-bdwwv" [f354cff5-9bb8-4013-9902-e4e72447beca] Running
	I1126 20:43:04.438775  184902 system_pods.go:89] "kube-apiserver-pause-166757" [e5703723-3967-4fb2-a8fd-83cdf9aeef3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:43:04.438786  184902 system_pods.go:89] "kube-controller-manager-pause-166757" [59567e7b-f221-4488-98dd-02435a3fd7e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:43:04.438797  184902 system_pods.go:89] "kube-proxy-tlg46" [0c1d444f-b32a-44c7-a1eb-ed3e962ba28f] Running
	I1126 20:43:04.438804  184902 system_pods.go:89] "kube-scheduler-pause-166757" [8f0ec421-1e52-447d-8235-08a1f90674a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:43:04.438817  184902 system_pods.go:126] duration metric: took 2.864053ms to wait for k8s-apps to be running ...
	I1126 20:43:04.438826  184902 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:43:04.438879  184902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:43:04.450936  184902 system_svc.go:56] duration metric: took 12.101167ms WaitForService to wait for kubelet
	I1126 20:43:04.450965  184902 kubeadm.go:587] duration metric: took 7.164529784s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:43:04.450985  184902 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:43:04.453993  184902 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:43:04.454026  184902 node_conditions.go:123] node cpu capacity is 2
	I1126 20:43:04.454038  184902 node_conditions.go:105] duration metric: took 3.026443ms to run NodePressure ...
	I1126 20:43:04.454051  184902 start.go:242] waiting for startup goroutines ...
	I1126 20:43:04.454058  184902 start.go:247] waiting for cluster config update ...
	I1126 20:43:04.454067  184902 start.go:256] writing updated cluster config ...
	I1126 20:43:04.454405  184902 ssh_runner.go:195] Run: rm -f paused
	I1126 20:43:04.457661  184902 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:43:04.458309  184902 kapi.go:59] client config for pause-166757: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/client.crt", KeyFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/profiles/pause-166757/client.key", CAFile:"/home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1126 20:43:04.461283  184902 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f8dk5" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:43:06.466904  184902 pod_ready.go:104] pod "coredns-66bc5c9577-f8dk5" is not "Ready", error: <nil>
	I1126 20:43:08.550605  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:08.551025  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:08.551078  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:08.551141  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:08.590294  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:08.590313  174302 cri.go:89] found id: ""
	I1126 20:43:08.590320  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:08.590376  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:08.594009  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:08.594086  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:08.636695  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:08.636716  174302 cri.go:89] found id: ""
	I1126 20:43:08.636725  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:08.636791  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:08.641394  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:08.641461  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:08.679491  174302 cri.go:89] found id: ""
	I1126 20:43:08.679514  174302 logs.go:282] 0 containers: []
	W1126 20:43:08.679523  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:08.679529  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:08.679585  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:43:08.717328  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:08.717347  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:08.717352  174302 cri.go:89] found id: ""
	I1126 20:43:08.717358  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:43:08.717423  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:08.720885  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:08.724684  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:43:08.724751  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:43:08.760902  174302 cri.go:89] found id: ""
	I1126 20:43:08.760975  174302 logs.go:282] 0 containers: []
	W1126 20:43:08.760998  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:43:08.761017  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:43:08.761104  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:43:08.798425  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:08.798445  174302 cri.go:89] found id: ""
	I1126 20:43:08.798454  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:43:08.798512  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:08.801816  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:43:08.801878  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:43:08.837958  174302 cri.go:89] found id: ""
	I1126 20:43:08.837984  174302 logs.go:282] 0 containers: []
	W1126 20:43:08.837999  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:43:08.838006  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:43:08.838064  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:43:08.873506  174302 cri.go:89] found id: ""
	I1126 20:43:08.873533  174302 logs.go:282] 0 containers: []
	W1126 20:43:08.873542  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:43:08.873556  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:43:08.873567  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:08.919435  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:43:08.919462  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:08.957083  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:43:08.957112  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:08.998769  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:43:08.998797  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:43:09.124234  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:43:09.124306  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:43:09.202267  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:43:09.202296  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:43:09.202309  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:09.252073  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:43:09.252104  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:09.352990  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:43:09.353025  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:43:09.423712  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:43:09.423747  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:09.471893  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:43:09.471920  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:43:11.987497  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:11.987946  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:11.988018  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:11.988088  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:12.037806  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:12.037831  174302 cri.go:89] found id: ""
	I1126 20:43:12.037838  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:12.037906  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:12.042427  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:12.042506  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:12.086616  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:12.086681  174302 cri.go:89] found id: ""
	I1126 20:43:12.086706  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:12.086788  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:12.090393  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:12.090511  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:12.127765  174302 cri.go:89] found id: ""
	I1126 20:43:12.127831  174302 logs.go:282] 0 containers: []
	W1126 20:43:12.127855  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:12.127873  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:12.127954  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:43:12.166923  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:12.166986  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:12.166998  174302 cri.go:89] found id: ""
	I1126 20:43:12.167013  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:43:12.167078  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:12.170549  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:12.173838  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:43:12.173916  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:43:12.211738  174302 cri.go:89] found id: ""
	I1126 20:43:12.211764  174302 logs.go:282] 0 containers: []
	W1126 20:43:12.211785  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:43:12.211792  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:43:12.211858  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:43:12.250173  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:12.250193  174302 cri.go:89] found id: ""
	I1126 20:43:12.250200  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:43:12.250254  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:12.253878  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:43:12.254002  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:43:12.289215  174302 cri.go:89] found id: ""
	I1126 20:43:12.289236  174302 logs.go:282] 0 containers: []
	W1126 20:43:12.289244  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:43:12.289251  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:43:12.289307  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:43:12.324267  174302 cri.go:89] found id: ""
	I1126 20:43:12.324290  174302 logs.go:282] 0 containers: []
	W1126 20:43:12.324298  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:43:12.324313  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:43:12.324326  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:43:12.447183  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:43:12.447218  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:43:12.462865  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:43:12.462896  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:43:12.539169  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:43:12.539235  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:43:12.539263  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:12.582542  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:43:12.582573  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:12.693482  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:43:12.693515  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:12.733460  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:43:12.733529  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:43:12.807128  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:43:12.807163  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	W1126 20:43:08.467337  184902 pod_ready.go:104] pod "coredns-66bc5c9577-f8dk5" is not "Ready", error: <nil>
	W1126 20:43:10.966812  184902 pod_ready.go:104] pod "coredns-66bc5c9577-f8dk5" is not "Ready", error: <nil>
	W1126 20:43:12.967534  184902 pod_ready.go:104] pod "coredns-66bc5c9577-f8dk5" is not "Ready", error: <nil>
	I1126 20:43:12.849793  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:43:12.849823  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:12.891691  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:43:12.891724  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:15.443160  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:15.443645  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:15.443699  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:15.443762  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:15.486735  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:15.486754  174302 cri.go:89] found id: ""
	I1126 20:43:15.486762  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:15.486816  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:15.490632  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:15.490699  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:15.533870  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:15.533978  174302 cri.go:89] found id: ""
	I1126 20:43:15.534003  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:15.534080  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:15.537762  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:15.537891  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:15.574630  174302 cri.go:89] found id: ""
	I1126 20:43:15.574704  174302 logs.go:282] 0 containers: []
	W1126 20:43:15.574733  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:15.574747  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:15.574810  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:43:15.611230  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:15.611255  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:15.611260  174302 cri.go:89] found id: ""
	I1126 20:43:15.611275  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:43:15.611331  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:15.616229  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:15.620649  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:43:15.620720  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:43:15.655879  174302 cri.go:89] found id: ""
	I1126 20:43:15.655955  174302 logs.go:282] 0 containers: []
	W1126 20:43:15.655970  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:43:15.655978  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:43:15.656038  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:43:15.692745  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:15.692767  174302 cri.go:89] found id: ""
	I1126 20:43:15.692775  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:43:15.692830  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:15.696280  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:43:15.696346  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:43:15.735490  174302 cri.go:89] found id: ""
	I1126 20:43:15.735511  174302 logs.go:282] 0 containers: []
	W1126 20:43:15.735520  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:43:15.735526  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:43:15.735586  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:43:15.772361  174302 cri.go:89] found id: ""
	I1126 20:43:15.772385  174302 logs.go:282] 0 containers: []
	W1126 20:43:15.772394  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:43:15.772415  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:43:15.772427  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:43:15.908295  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:43:15.908395  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:43:15.931507  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:43:15.931606  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:15.979016  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:43:15.979044  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:16.020683  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:43:16.020716  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:43:16.092286  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:43:16.092324  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:16.132712  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:43:16.132753  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:43:16.218133  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:43:16.218152  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:43:16.218165  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:16.261621  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:43:16.261654  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:16.313384  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:43:16.313422  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	W1126 20:43:15.468249  184902 pod_ready.go:104] pod "coredns-66bc5c9577-f8dk5" is not "Ready", error: <nil>
	I1126 20:43:17.966489  184902 pod_ready.go:94] pod "coredns-66bc5c9577-f8dk5" is "Ready"
	I1126 20:43:17.966516  184902 pod_ready.go:86] duration metric: took 13.505205833s for pod "coredns-66bc5c9577-f8dk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:17.968859  184902 pod_ready.go:83] waiting for pod "etcd-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:17.973158  184902 pod_ready.go:94] pod "etcd-pause-166757" is "Ready"
	I1126 20:43:17.973183  184902 pod_ready.go:86] duration metric: took 4.297951ms for pod "etcd-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:17.975461  184902 pod_ready.go:83] waiting for pod "kube-apiserver-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:17.979758  184902 pod_ready.go:94] pod "kube-apiserver-pause-166757" is "Ready"
	I1126 20:43:17.979784  184902 pod_ready.go:86] duration metric: took 4.302061ms for pod "kube-apiserver-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:17.982124  184902 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:18.164898  184902 pod_ready.go:94] pod "kube-controller-manager-pause-166757" is "Ready"
	I1126 20:43:18.164922  184902 pod_ready.go:86] duration metric: took 182.776758ms for pod "kube-controller-manager-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:18.365010  184902 pod_ready.go:83] waiting for pod "kube-proxy-tlg46" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:18.765072  184902 pod_ready.go:94] pod "kube-proxy-tlg46" is "Ready"
	I1126 20:43:18.765099  184902 pod_ready.go:86] duration metric: took 400.064696ms for pod "kube-proxy-tlg46" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:18.969626  184902 pod_ready.go:83] waiting for pod "kube-scheduler-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:19.365259  184902 pod_ready.go:94] pod "kube-scheduler-pause-166757" is "Ready"
	I1126 20:43:19.365282  184902 pod_ready.go:86] duration metric: took 395.635183ms for pod "kube-scheduler-pause-166757" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:43:19.365294  184902 pod_ready.go:40] duration metric: took 14.907577048s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:43:19.440494  184902 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:43:19.445822  184902 out.go:179] * Done! kubectl is now configured to use "pause-166757" cluster and "default" namespace by default
	I1126 20:43:18.917286  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:18.917792  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:18.917848  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:18.917939  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:18.969911  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:18.969953  174302 cri.go:89] found id: ""
	I1126 20:43:18.969965  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:18.970020  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:18.973616  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:18.973685  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:19.018471  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:19.018503  174302 cri.go:89] found id: ""
	I1126 20:43:19.018513  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:19.018572  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:19.022469  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:19.022540  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:19.059748  174302 cri.go:89] found id: ""
	I1126 20:43:19.059771  174302 logs.go:282] 0 containers: []
	W1126 20:43:19.059780  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:19.059786  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:19.059848  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1126 20:43:19.103616  174302 cri.go:89] found id: "e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:19.103640  174302 cri.go:89] found id: "eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:19.103647  174302 cri.go:89] found id: ""
	I1126 20:43:19.103656  174302 logs.go:282] 2 containers: [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599]
	I1126 20:43:19.103715  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:19.107294  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:19.110741  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1126 20:43:19.110809  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1126 20:43:19.147595  174302 cri.go:89] found id: ""
	I1126 20:43:19.147616  174302 logs.go:282] 0 containers: []
	W1126 20:43:19.147625  174302 logs.go:284] No container was found matching "kube-proxy"
	I1126 20:43:19.147631  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1126 20:43:19.147692  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1126 20:43:19.194957  174302 cri.go:89] found id: "57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:19.195044  174302 cri.go:89] found id: ""
	I1126 20:43:19.195067  174302 logs.go:282] 1 containers: [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad]
	I1126 20:43:19.195153  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:19.199521  174302 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1126 20:43:19.199616  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1126 20:43:19.237761  174302 cri.go:89] found id: ""
	I1126 20:43:19.237783  174302 logs.go:282] 0 containers: []
	W1126 20:43:19.237792  174302 logs.go:284] No container was found matching "kindnet"
	I1126 20:43:19.237798  174302 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1126 20:43:19.237855  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1126 20:43:19.280140  174302 cri.go:89] found id: ""
	I1126 20:43:19.280161  174302 logs.go:282] 0 containers: []
	W1126 20:43:19.280170  174302 logs.go:284] No container was found matching "storage-provisioner"
	I1126 20:43:19.280184  174302 logs.go:123] Gathering logs for kubelet ...
	I1126 20:43:19.280195  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1126 20:43:19.413097  174302 logs.go:123] Gathering logs for dmesg ...
	I1126 20:43:19.413186  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1126 20:43:19.433480  174302 logs.go:123] Gathering logs for kube-apiserver [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e] ...
	I1126 20:43:19.433554  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:19.519223  174302 logs.go:123] Gathering logs for kube-scheduler [e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f] ...
	I1126 20:43:19.519259  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e13574c8abee1d87785c8b5fc20415472f86e57adae0e533ee05f7cc6cf84d5f"
	I1126 20:43:19.683964  174302 logs.go:123] Gathering logs for CRI-O ...
	I1126 20:43:19.683995  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1126 20:43:19.764485  174302 logs.go:123] Gathering logs for describe nodes ...
	I1126 20:43:19.764519  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1126 20:43:19.851282  174302 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1126 20:43:19.851304  174302 logs.go:123] Gathering logs for etcd [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c] ...
	I1126 20:43:19.851317  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:19.945471  174302 logs.go:123] Gathering logs for kube-scheduler [eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599] ...
	I1126 20:43:19.945504  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb68c52437a470572af2680dcb3ac17df0c6a016b08c4c647eb235daec616599"
	I1126 20:43:19.995157  174302 logs.go:123] Gathering logs for kube-controller-manager [57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad] ...
	I1126 20:43:19.995233  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57b0c83d0be69628ac04534661a9462b146e050c903ad94d8e4737ad703a54ad"
	I1126 20:43:20.048461  174302 logs.go:123] Gathering logs for container status ...
	I1126 20:43:20.048536  174302 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1126 20:43:22.626346  174302 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:43:22.626741  174302 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1126 20:43:22.626782  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1126 20:43:22.626838  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1126 20:43:22.666573  174302 cri.go:89] found id: "d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e"
	I1126 20:43:22.666595  174302 cri.go:89] found id: ""
	I1126 20:43:22.666604  174302 logs.go:282] 1 containers: [d61a6631759a355291d66dc7f2a3e76ab903750598198907b0fc08ee7b83958e]
	I1126 20:43:22.666661  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:22.670367  174302 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1126 20:43:22.670438  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1126 20:43:22.719142  174302 cri.go:89] found id: "380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c"
	I1126 20:43:22.719161  174302 cri.go:89] found id: ""
	I1126 20:43:22.719169  174302 logs.go:282] 1 containers: [380ee7ffb98a260d507a2c142af3b373c1dcf9f2a7291b0f83af3f30974ccc2c]
	I1126 20:43:22.719224  174302 ssh_runner.go:195] Run: which crictl
	I1126 20:43:22.723666  174302 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1126 20:43:22.723727  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1126 20:43:22.788379  174302 cri.go:89] found id: ""
	I1126 20:43:22.788401  174302 logs.go:282] 0 containers: []
	W1126 20:43:22.788409  174302 logs.go:284] No container was found matching "coredns"
	I1126 20:43:22.788416  174302 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1126 20:43:22.788479  174302 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	
	
	==> CRI-O <==
	Nov 26 20:42:58 pause-166757 crio[2213]: time="2025-11-26T20:42:58.00403624Z" level=info msg="Removed container 6f73d60362531c85177302c22f2f1558a8f9f96309baa3cca8ee2a994661c583: kube-system/coredns-66bc5c9577-f8dk5/coredns" id=9781c7a0-d6aa-4dcd-b511-8d7434556224 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.275965566Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.279230408Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.279263785Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.279290672Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.282702142Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.282731326Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.282746349Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.285578363Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.285610017Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.285660181Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.288377073Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.28840981Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.288431479Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.291279631Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:43:08 pause-166757 crio[2213]: time="2025-11-26T20:43:08.291308717Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.006154439Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=e40179f6-b012-4fb0-a6f1-23ef4217eb18 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.007500275Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=b6ea76de-7df6-450d-aa6f-5c06c827d91b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.011088764Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-f8dk5/coredns" id=7af11dde-15f2-4248-90e8-e58d8748a8a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.011313871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.024868828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.025527363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.044990629Z" level=info msg="Created container ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d: kube-system/coredns-66bc5c9577-f8dk5/coredns" id=7af11dde-15f2-4248-90e8-e58d8748a8a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.046091748Z" level=info msg="Starting container: ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d" id=002fd396-d5c6-4677-b3ca-c3458feb877c name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:43:17 pause-166757 crio[2213]: time="2025-11-26T20:43:17.047734072Z" level=info msg="Started container" PID=2774 containerID=ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d description=kube-system/coredns-66bc5c9577-f8dk5/coredns id=002fd396-d5c6-4677-b3ca-c3458feb877c name=/runtime.v1.RuntimeService/StartContainer sandboxID=fae3cfb4df00460a4e54af77063c6a86c8856706f9296ed1b30e0b125df0932b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ff6913ff92f7a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   8 seconds ago       Running             coredns                   2                   fae3cfb4df004       coredns-66bc5c9577-f8dk5               kube-system
	ff0a5f1227925       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   27 seconds ago      Running             kindnet-cni               2                   3e7f0cdb76091       kindnet-bdwwv                          kube-system
	bf90263bd4f1c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   27 seconds ago      Running             kube-proxy                2                   c266ae892eda7       kube-proxy-tlg46                       kube-system
	eac939c08bc98       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   27 seconds ago      Running             etcd                      2                   620aab564233c       etcd-pause-166757                      kube-system
	8280393973d71       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   27 seconds ago      Running             kube-apiserver            2                   33a7b14ffdf2c       kube-apiserver-pause-166757            kube-system
	4f7996a732bd7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   27 seconds ago      Running             kube-controller-manager   2                   67ae632467a48       kube-controller-manager-pause-166757   kube-system
	091ca865eebb2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   27 seconds ago      Running             kube-scheduler            2                   0f00b3b379c4d       kube-scheduler-pause-166757            kube-system
	2db020b8c32b5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   39 seconds ago      Exited              kube-apiserver            1                   33a7b14ffdf2c       kube-apiserver-pause-166757            kube-system
	0db000c6d2320       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   40 seconds ago      Exited              kube-scheduler            1                   0f00b3b379c4d       kube-scheduler-pause-166757            kube-system
	60b0ffbf35dd0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   40 seconds ago      Exited              coredns                   1                   fae3cfb4df004       coredns-66bc5c9577-f8dk5               kube-system
	d3ad91d7746bb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   40 seconds ago      Exited              kindnet-cni               1                   3e7f0cdb76091       kindnet-bdwwv                          kube-system
	4dee54f7f5168       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   40 seconds ago      Exited              kube-proxy                1                   c266ae892eda7       kube-proxy-tlg46                       kube-system
	a84e4d20f1907       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   40 seconds ago      Exited              kube-controller-manager   1                   67ae632467a48       kube-controller-manager-pause-166757   kube-system
	6dffcf8b99674       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   40 seconds ago      Exited              etcd                      1                   620aab564233c       etcd-pause-166757                      kube-system
	
	
	==> coredns [60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49685 - 64690 "HINFO IN 1303599683672573835.5790048047726244057. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004630423s
	
	
	==> coredns [ff6913ff92f7a33d5f79b7e72cde6b3145439ac3dd25b28de6bda5ca2d449f5d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41572 - 9942 "HINFO IN 1681300439164332242.8011566670377224255. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012742238s
	
	
	==> describe nodes <==
	Name:               pause-166757
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-166757
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=pause-166757
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_41_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:41:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-166757
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:43:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:43:18 +0000   Wed, 26 Nov 2025 20:41:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:43:18 +0000   Wed, 26 Nov 2025 20:41:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:43:18 +0000   Wed, 26 Nov 2025 20:41:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:43:18 +0000   Wed, 26 Nov 2025 20:42:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-166757
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                d10a9b8f-65c2-47ef-a8f7-afd4c450fae8
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-f8dk5                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     92s
	  kube-system                 etcd-pause-166757                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         99s
	  kube-system                 kindnet-bdwwv                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      93s
	  kube-system                 kube-apiserver-pause-166757             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-pause-166757    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-tlg46                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-pause-166757             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 90s                  kube-proxy       
	  Normal   Starting                 22s                  kube-proxy       
	  Warning  CgroupV1                 106s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  106s (x8 over 106s)  kubelet          Node pause-166757 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x8 over 106s)  kubelet          Node pause-166757 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x8 over 106s)  kubelet          Node pause-166757 status is now: NodeHasSufficientPID
	  Normal   Starting                 99s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 99s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  98s                  kubelet          Node pause-166757 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    98s                  kubelet          Node pause-166757 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s                  kubelet          Node pause-166757 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           93s                  node-controller  Node pause-166757 event: Registered Node pause-166757 in Controller
	  Normal   NodeReady                50s                  kubelet          Node pause-166757 status is now: NodeReady
	  Warning  ContainerGCFailed        38s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           19s                  node-controller  Node pause-166757 event: Registered Node pause-166757 in Controller
	
	
	==> dmesg <==
	[  +3.105496] overlayfs: idmapped layers are currently not supported
	[ +37.228314] overlayfs: idmapped layers are currently not supported
	[Nov26 20:05] overlayfs: idmapped layers are currently not supported
	[Nov26 20:06] overlayfs: idmapped layers are currently not supported
	[  +3.713866] overlayfs: idmapped layers are currently not supported
	[Nov26 20:14] overlayfs: idmapped layers are currently not supported
	[Nov26 20:16] overlayfs: idmapped layers are currently not supported
	[Nov26 20:21] overlayfs: idmapped layers are currently not supported
	[ +33.563196] overlayfs: idmapped layers are currently not supported
	[Nov26 20:23] overlayfs: idmapped layers are currently not supported
	[Nov26 20:24] overlayfs: idmapped layers are currently not supported
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6dffcf8b996742928728e2c585061644cc362bcb92cdff0791c4434cf0f2073a] <==
	{"level":"info","ts":"2025-11-26T20:42:45.871728Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-26T20:42:45.871916Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:42:45.872123Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-26T20:42:45.872166Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-11-26T20:42:45.872946Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-11-26T20:42:45.873069Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-26T20:42:45.899842Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-26T20:42:46.742705Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-26T20:42:46.742747Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-166757","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-26T20:42:46.742888Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-26T20:42:46.744098Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-26T20:42:46.746171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T20:42:46.746622Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-11-26T20:42:46.747022Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-26T20:42:46.747136Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-26T20:42:46.747951Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T20:42:46.747993Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-26T20:42:46.748034Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-26T20:42:46.748160Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-26T20:42:46.748196Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-26T20:42:46.748207Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T20:42:46.758820Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-26T20:42:46.758972Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-26T20:42:46.766140Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-26T20:42:46.766176Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-166757","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [eac939c08bc98665f4bf51748fc29d22412f9ee4271d7560afcbe9d5813486ae] <==
	{"level":"warn","ts":"2025-11-26T20:43:00.979727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.006110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.043979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.066035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.103407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.124080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.155362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.189427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.232399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.287745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.313552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.378925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.422100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.465572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.512121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.542004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.569049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.596850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.622164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.672596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.701945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.748810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.817065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:01.846156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:43:02.016051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56016","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:43:25 up  1:25,  0 user,  load average: 1.29, 1.92, 1.83
	Linux pause-166757 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d3ad91d7746bb4b386071782c6f36969bb925be7fbcfcd4d33a447d23efb7975] <==
	I1126 20:42:45.724437       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:42:45.724672       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:42:45.724843       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:42:45.724960       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:42:45.725001       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:42:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:42:45.869574       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:42:45.869656       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:42:45.869695       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:42:45.870458       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:42:45.926232       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:42:45.926435       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 20:42:45.926576       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:42:45.926740       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	
	
	==> kindnet [ff0a5f1227925b4bdb72055f1ac096149718cb675cab7d6d694aa06631f5ccea] <==
	I1126 20:42:58.044849       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:42:58.057393       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:42:58.057515       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:42:58.057528       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:42:58.057543       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:42:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:42:58.273360       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:42:58.282050       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:42:58.282127       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:42:58.282268       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:43:03.083017       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:43:03.083144       1 metrics.go:72] Registering metrics
	I1126 20:43:03.083251       1 controller.go:711] "Syncing nftables rules"
	I1126 20:43:08.275614       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:43:08.275681       1 main.go:301] handling current node
	I1126 20:43:18.272593       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:43:18.272628       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2db020b8c32b522251976eced59d8bb3bac5adab09d141a0bf566661e506974c] <==
	I1126 20:42:45.918878       1 options.go:263] external host was not specified, using 192.168.85.2
	I1126 20:42:45.928796       1 server.go:150] Version: v1.34.1
	I1126 20:42:45.928925       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [8280393973d719432323cdf237acb2bda01b8dce41b8dffb5bd87ebc5d1dd828] <==
	I1126 20:43:03.001583       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1126 20:43:03.001703       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1126 20:43:03.001736       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1126 20:43:03.001793       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:43:03.010179       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:43:03.010335       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:43:03.019884       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:43:03.020198       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:43:03.020348       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1126 20:43:03.023833       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:43:03.024461       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1126 20:43:03.024674       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:43:03.024740       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:43:03.024773       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:43:03.024808       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:43:03.028715       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1126 20:43:03.028759       1 policy_source.go:240] refreshing policies
	I1126 20:43:03.031298       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1126 20:43:03.039746       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:43:03.711497       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:43:04.896811       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:43:06.386693       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:43:06.484922       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:43:06.536539       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:43:06.636734       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [4f7996a732bd73b5f908a785886db88ef6214a2067d6c11b1d4e1292f31b6556] <==
	I1126 20:43:06.241611       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 20:43:06.241702       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:43:06.242989       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:43:06.243076       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:43:06.243147       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-166757"
	I1126 20:43:06.243191       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1126 20:43:06.243295       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:43:06.245293       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:43:06.249164       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1126 20:43:06.250777       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:43:06.271114       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1126 20:43:06.273412       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:43:06.278329       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 20:43:06.278335       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:43:06.278355       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:43:06.278367       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:43:06.280534       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:43:06.281842       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 20:43:06.281914       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:43:06.283689       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:43:06.285784       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:43:06.288190       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:43:06.294509       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:43:06.294535       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:43:06.294544       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [a84e4d20f1907030703fc54a2a88bc2779dec332e6e8415d049b55a34abd0119] <==
	I1126 20:42:46.690158       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-proxy [4dee54f7f5168459562bdac0a84ab912b1e6d20efea644ea468f645384533723] <==
	I1126 20:42:46.515843       1 server_linux.go:53] "Using iptables proxy"
	
	
	==> kube-proxy [bf90263bd4f1cf3ae79640f3420e3512ddac538a4089f3d2dd281242570b18dc] <==
	I1126 20:42:58.651226       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:42:59.651289       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:43:03.051926       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:43:03.052029       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1126 20:43:03.052153       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:43:03.081316       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:43:03.081452       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:43:03.093544       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:43:03.094034       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:43:03.094103       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:43:03.103366       1 config.go:200] "Starting service config controller"
	I1126 20:43:03.103439       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:43:03.103483       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:43:03.103509       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:43:03.103547       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:43:03.103573       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:43:03.113766       1 config.go:309] "Starting node config controller"
	I1126 20:43:03.113851       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:43:03.114944       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:43:03.205532       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:43:03.205538       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:43:03.205567       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [091ca865eebb280db3b387e326ef44d9b1d136413786c299225e04fa0f4673c1] <==
	I1126 20:43:00.001057       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:43:02.874425       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:43:02.874468       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:43:02.874478       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:43:02.874486       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:43:02.995843       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:43:02.995962       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:43:03.003411       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:43:03.003766       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:43:03.003832       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:43:03.003893       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:43:03.104847       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [0db000c6d2320c82ec9be70d6c38cf52db881b458ac9fcbb65a9de481d9005fd] <==
	
	
	==> kubelet <==
	Nov 26 20:42:57 pause-166757 kubelet[1316]: E1126 20:42:57.706419    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-166757\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a8a9a2580b16520cc16b60787efc26f3" pod="kube-system/etcd-pause-166757"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: E1126 20:42:57.706724    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-166757\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5d0e9f4903b23930a563c698eb6239b4" pod="kube-system/kube-controller-manager-pause-166757"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: E1126 20:42:57.707020    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tlg46\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0c1d444f-b32a-44c7-a1eb-ed3e962ba28f" pod="kube-system/kube-proxy-tlg46"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: E1126 20:42:57.707386    1316 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-bdwwv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f354cff5-9bb8-4013-9902-e4e72447beca" pod="kube-system/kindnet-bdwwv"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: I1126 20:42:57.722249    1316 scope.go:117] "RemoveContainer" containerID="5358710efec2a46ce31c272e0d7f8949694cd7300a389f2e5ef3016fa8458d3b"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: I1126 20:42:57.799591    1316 scope.go:117] "RemoveContainer" containerID="db11bad774b4a4bfedcd139e4ff4e88d55fb014c71e7cc7cc2dd585051987b3a"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: I1126 20:42:57.899965    1316 scope.go:117] "RemoveContainer" containerID="97381f7b321c19f78df8e35bcd215fb879395945793d05255aa19eedfec476e0"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: I1126 20:42:57.932521    1316 scope.go:117] "RemoveContainer" containerID="c11d4d76b5030322394f2928ebbca2cdde33bb90f61362d7dee70fa18b14711d"
	Nov 26 20:42:57 pause-166757 kubelet[1316]: I1126 20:42:57.976061    1316 scope.go:117] "RemoveContainer" containerID="6f73d60362531c85177302c22f2f1558a8f9f96309baa3cca8ee2a994661c583"
	Nov 26 20:42:58 pause-166757 kubelet[1316]: I1126 20:42:58.726904    1316 scope.go:117] "RemoveContainer" containerID="60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	Nov 26 20:42:58 pause-166757 kubelet[1316]: E1126 20:42:58.727501    1316 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-f8dk5_kube-system(1e650291-05a3-45a5-9886-938e718690d8)\"" pod="kube-system/coredns-66bc5c9577-f8dk5" podUID="1e650291-05a3-45a5-9886-938e718690d8"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.766613    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-166757\" is forbidden: User \"system:node:pause-166757\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" podUID="4822e8c8ac682bfa93918aca1b60b9ce" pod="kube-system/kube-scheduler-pause-166757"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.768088    1316 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-166757\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.768227    1316 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-166757\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.768300    1316 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-166757\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.820868    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-166757\" is forbidden: User \"system:node:pause-166757\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" podUID="bc338ec41690a6900749846a15a3aec1" pod="kube-system/kube-apiserver-pause-166757"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.868528    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-166757\" is forbidden: User \"system:node:pause-166757\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" podUID="a8a9a2580b16520cc16b60787efc26f3" pod="kube-system/etcd-pause-166757"
	Nov 26 20:43:02 pause-166757 kubelet[1316]: E1126 20:43:02.938431    1316 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-166757\" is forbidden: User \"system:node:pause-166757\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-166757' and this object" podUID="5d0e9f4903b23930a563c698eb6239b4" pod="kube-system/kube-controller-manager-pause-166757"
	Nov 26 20:43:05 pause-166757 kubelet[1316]: I1126 20:43:05.473605    1316 scope.go:117] "RemoveContainer" containerID="60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	Nov 26 20:43:05 pause-166757 kubelet[1316]: E1126 20:43:05.474264    1316 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-f8dk5_kube-system(1e650291-05a3-45a5-9886-938e718690d8)\"" pod="kube-system/coredns-66bc5c9577-f8dk5" podUID="1e650291-05a3-45a5-9886-938e718690d8"
	Nov 26 20:43:07 pause-166757 kubelet[1316]: W1126 20:43:07.271555    1316 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 26 20:43:17 pause-166757 kubelet[1316]: I1126 20:43:17.005123    1316 scope.go:117] "RemoveContainer" containerID="60b0ffbf35dd06ac1d919bad7d884dfc92df11b54586eb065a37b40392a53e95"
	Nov 26 20:43:20 pause-166757 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:43:20 pause-166757 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:43:20 pause-166757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-166757 -n pause-166757
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-166757 -n pause-166757: exit status 2 (488.888389ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-166757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-264537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-264537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (262.839751ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:46:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-264537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-264537 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-264537 describe deploy/metrics-server -n kube-system: exit status 1 (79.168886ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-264537 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-264537
helpers_test.go:243: (dbg) docker inspect old-k8s-version-264537:

-- stdout --
	[
	    {
	        "Id": "a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747",
	        "Created": "2025-11-26T20:45:36.56908992Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 201347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:45:36.632047135Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/hostname",
	        "HostsPath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/hosts",
	        "LogPath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747-json.log",
	        "Name": "/old-k8s-version-264537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-264537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-264537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747",
	                "LowerDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-264537",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-264537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-264537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-264537",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-264537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc89079e7eae0e2db8e2e78b56fd9fa1c6d687e17693882caf5037f92dfd95d1",
	            "SandboxKey": "/var/run/docker/netns/cc89079e7eae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-264537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:f2:d9:a5:9e:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a0df607f6641f4214aa99b2f7e135610ec93c7d857cfae2423703322c6f61751",
	                    "EndpointID": "c0f8d8469058ceb29d56ee0cccc48446487aabc97c806368a205d73cc7cc384a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-264537",
	                        "a5e16735df4a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264537 -n old-k8s-version-264537
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-264537 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-264537 logs -n 25: (1.17380188s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-235709 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo containerd config dump                                                                                                                                                                                                  │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo crio config                                                                                                                                                                                                             │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ delete  │ -p cilium-235709                                                                                                                                                                                                                              │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p force-systemd-env-274518 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-274518  │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ ssh     │ force-systemd-flag-622960 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-622960 │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ delete  │ -p force-systemd-flag-622960                                                                                                                                                                                                                  │ force-systemd-flag-622960 │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-164741    │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ delete  │ -p force-systemd-env-274518                                                                                                                                                                                                                   │ force-systemd-env-274518  │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p cert-options-207115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ cert-options-207115 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ -p cert-options-207115 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ delete  │ -p cert-options-207115                                                                                                                                                                                                                        │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-264537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:45:30
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:45:30.864088  200894 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:45:30.864247  200894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:45:30.864272  200894 out.go:374] Setting ErrFile to fd 2...
	I1126 20:45:30.864283  200894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:45:30.864678  200894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:45:30.865199  200894 out.go:368] Setting JSON to false
	I1126 20:45:30.866282  200894 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5261,"bootTime":1764184670,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:45:30.866381  200894 start.go:143] virtualization:  
	I1126 20:45:30.870099  200894 out.go:179] * [old-k8s-version-264537] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:45:30.874735  200894 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:45:30.874843  200894 notify.go:221] Checking for updates...
	I1126 20:45:30.881358  200894 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:45:30.884692  200894 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:45:30.887980  200894 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:45:30.891133  200894 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:45:30.894163  200894 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:45:30.897614  200894 config.go:182] Loaded profile config "cert-expiration-164741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:45:30.897721  200894 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:45:30.930819  200894 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:45:30.930960  200894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:45:30.998656  200894 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:45:30.988410924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:45:30.998760  200894 docker.go:319] overlay module found
	I1126 20:45:31.002039  200894 out.go:179] * Using the docker driver based on user configuration
	I1126 20:45:31.005023  200894 start.go:309] selected driver: docker
	I1126 20:45:31.005044  200894 start.go:927] validating driver "docker" against <nil>
	I1126 20:45:31.005074  200894 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:45:31.005795  200894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:45:31.064049  200894 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:45:31.054934727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:45:31.064195  200894 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 20:45:31.064519  200894 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:45:31.067709  200894 out.go:179] * Using Docker driver with root privileges
	I1126 20:45:31.070582  200894 cni.go:84] Creating CNI manager for ""
	I1126 20:45:31.070657  200894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:45:31.070671  200894 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 20:45:31.070758  200894 start.go:353] cluster config:
	{Name:old-k8s-version-264537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:45:31.074122  200894 out.go:179] * Starting "old-k8s-version-264537" primary control-plane node in "old-k8s-version-264537" cluster
	I1126 20:45:31.077012  200894 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:45:31.079918  200894 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:45:31.082766  200894 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:45:31.082821  200894 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1126 20:45:31.082832  200894 cache.go:65] Caching tarball of preloaded images
	I1126 20:45:31.082884  200894 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:45:31.082923  200894 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:45:31.082934  200894 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1126 20:45:31.083055  200894 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/config.json ...
	I1126 20:45:31.083073  200894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/config.json: {Name:mkbc274cb9e50bbac68505e3ab579c164ca6d91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:45:31.103484  200894 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:45:31.103509  200894 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:45:31.103530  200894 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:45:31.103562  200894 start.go:360] acquireMachinesLock for old-k8s-version-264537: {Name:mk29e49468e71e0dea1a65078cbaf777af655706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:45:31.103679  200894 start.go:364] duration metric: took 95.865µs to acquireMachinesLock for "old-k8s-version-264537"
	I1126 20:45:31.103710  200894 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-264537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:45:31.103798  200894 start.go:125] createHost starting for "" (driver="docker")
	I1126 20:45:31.107321  200894 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:45:31.107574  200894 start.go:159] libmachine.API.Create for "old-k8s-version-264537" (driver="docker")
	I1126 20:45:31.107608  200894 client.go:173] LocalClient.Create starting
	I1126 20:45:31.107685  200894 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem
	I1126 20:45:31.107733  200894 main.go:143] libmachine: Decoding PEM data...
	I1126 20:45:31.107754  200894 main.go:143] libmachine: Parsing certificate...
	I1126 20:45:31.107807  200894 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem
	I1126 20:45:31.107830  200894 main.go:143] libmachine: Decoding PEM data...
	I1126 20:45:31.107849  200894 main.go:143] libmachine: Parsing certificate...
	I1126 20:45:31.108224  200894 cli_runner.go:164] Run: docker network inspect old-k8s-version-264537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:45:31.125577  200894 cli_runner.go:211] docker network inspect old-k8s-version-264537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:45:31.125688  200894 network_create.go:284] running [docker network inspect old-k8s-version-264537] to gather additional debugging logs...
	I1126 20:45:31.125709  200894 cli_runner.go:164] Run: docker network inspect old-k8s-version-264537
	W1126 20:45:31.152110  200894 cli_runner.go:211] docker network inspect old-k8s-version-264537 returned with exit code 1
	I1126 20:45:31.152153  200894 network_create.go:287] error running [docker network inspect old-k8s-version-264537]: docker network inspect old-k8s-version-264537: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-264537 not found
	I1126 20:45:31.152212  200894 network_create.go:289] output of [docker network inspect old-k8s-version-264537]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-264537 not found
	
	** /stderr **
	I1126 20:45:31.152455  200894 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:45:31.169512  200894 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20cb65a83ad5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:26:47:2b:2e:03} reservation:<nil>}
	I1126 20:45:31.169834  200894 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-16105a7ff776 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:75:f6:9d:ad:ac} reservation:<nil>}
	I1126 20:45:31.170264  200894 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f1c69ea9dfa3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b7:bf:8a:44:80} reservation:<nil>}
	I1126 20:45:31.170674  200894 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a5e80}
	I1126 20:45:31.170693  200894 network_create.go:124] attempt to create docker network old-k8s-version-264537 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1126 20:45:31.170747  200894 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-264537 old-k8s-version-264537
	I1126 20:45:31.223953  200894 network_create.go:108] docker network old-k8s-version-264537 192.168.76.0/24 created
	I1126 20:45:31.223980  200894 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-264537" container
	I1126 20:45:31.224049  200894 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:45:31.241383  200894 cli_runner.go:164] Run: docker volume create old-k8s-version-264537 --label name.minikube.sigs.k8s.io=old-k8s-version-264537 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:45:31.257624  200894 oci.go:103] Successfully created a docker volume old-k8s-version-264537
	I1126 20:45:31.257782  200894 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-264537-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-264537 --entrypoint /usr/bin/test -v old-k8s-version-264537:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:45:31.811861  200894 oci.go:107] Successfully prepared a docker volume old-k8s-version-264537
	I1126 20:45:31.811925  200894 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:45:31.811935  200894 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:45:31.812016  200894 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-264537:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:45:36.499945  200894 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-264537:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.68788254s)
	I1126 20:45:36.499975  200894 kic.go:203] duration metric: took 4.688036553s to extract preloaded images to volume ...
	W1126 20:45:36.500111  200894 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1126 20:45:36.500219  200894 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:45:36.549995  200894 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-264537 --name old-k8s-version-264537 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-264537 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-264537 --network old-k8s-version-264537 --ip 192.168.76.2 --volume old-k8s-version-264537:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:45:36.885201  200894 cli_runner.go:164] Run: docker container inspect old-k8s-version-264537 --format={{.State.Running}}
	I1126 20:45:36.906043  200894 cli_runner.go:164] Run: docker container inspect old-k8s-version-264537 --format={{.State.Status}}
	I1126 20:45:36.929778  200894 cli_runner.go:164] Run: docker exec old-k8s-version-264537 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:45:36.984088  200894 oci.go:144] the created container "old-k8s-version-264537" has a running status.
	I1126 20:45:36.984115  200894 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/old-k8s-version-264537/id_rsa...
	I1126 20:45:37.269733  200894 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-2326/.minikube/machines/old-k8s-version-264537/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:45:37.299561  200894 cli_runner.go:164] Run: docker container inspect old-k8s-version-264537 --format={{.State.Status}}
	I1126 20:45:37.326041  200894 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:45:37.326066  200894 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-264537 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:45:37.384103  200894 cli_runner.go:164] Run: docker container inspect old-k8s-version-264537 --format={{.State.Status}}
	I1126 20:45:37.401400  200894 machine.go:94] provisionDockerMachine start ...
	I1126 20:45:37.401484  200894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:45:37.420795  200894 main.go:143] libmachine: Using SSH client type: native
	I1126 20:45:37.421193  200894 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1126 20:45:37.421205  200894 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:45:37.422084  200894 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1126 20:45:40.574150  200894 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-264537
	
	I1126 20:45:40.574173  200894 ubuntu.go:182] provisioning hostname "old-k8s-version-264537"
	I1126 20:45:40.574244  200894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:45:40.594402  200894 main.go:143] libmachine: Using SSH client type: native
	I1126 20:45:40.594770  200894 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1126 20:45:40.594789  200894 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-264537 && echo "old-k8s-version-264537" | sudo tee /etc/hostname
	I1126 20:45:40.750920  200894 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-264537
	
	I1126 20:45:40.751044  200894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:45:40.768008  200894 main.go:143] libmachine: Using SSH client type: native
	I1126 20:45:40.768322  200894 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1126 20:45:40.768345  200894 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-264537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-264537/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-264537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:45:40.918015  200894 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:45:40.918084  200894 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:45:40.918133  200894 ubuntu.go:190] setting up certificates
	I1126 20:45:40.918174  200894 provision.go:84] configureAuth start
	I1126 20:45:40.918257  200894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-264537
	I1126 20:45:40.935044  200894 provision.go:143] copyHostCerts
	I1126 20:45:40.935113  200894 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:45:40.935121  200894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:45:40.935199  200894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:45:40.935293  200894 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:45:40.935303  200894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:45:40.935335  200894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:45:40.935424  200894 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:45:40.935430  200894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:45:40.935453  200894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:45:40.935505  200894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-264537 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-264537]
	I1126 20:45:41.237622  200894 provision.go:177] copyRemoteCerts
	I1126 20:45:41.237712  200894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:45:41.237780  200894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:45:41.266333  200894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/old-k8s-version-264537/id_rsa Username:docker}
	I1126 20:45:41.369499  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:45:41.386861  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1126 20:45:41.405133  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:45:41.423277  200894 provision.go:87] duration metric: took 505.060914ms to configureAuth
	I1126 20:45:41.423349  200894 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:45:41.423580  200894 config.go:182] Loaded profile config "old-k8s-version-264537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:45:41.423717  200894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:45:41.441290  200894 main.go:143] libmachine: Using SSH client type: native
	I1126 20:45:41.441718  200894 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33043 <nil> <nil>}
	I1126 20:45:41.441742  200894 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:45:41.727173  200894 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:45:41.727195  200894 machine.go:97] duration metric: took 4.325775776s to provisionDockerMachine
	I1126 20:45:41.727207  200894 client.go:176] duration metric: took 10.619588115s to LocalClient.Create
	I1126 20:45:41.727226  200894 start.go:167] duration metric: took 10.619653408s to libmachine.API.Create "old-k8s-version-264537"
	I1126 20:45:41.727237  200894 start.go:293] postStartSetup for "old-k8s-version-264537" (driver="docker")
	I1126 20:45:41.727247  200894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:45:41.727322  200894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:45:41.727365  200894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:45:41.746149  200894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/old-k8s-version-264537/id_rsa Username:docker}
	I1126 20:45:41.849821  200894 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:45:41.853099  200894 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:45:41.853124  200894 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:45:41.853134  200894 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:45:41.853187  200894 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:45:41.853264  200894 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:45:41.853362  200894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:45:41.860793  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:45:41.878860  200894 start.go:296] duration metric: took 151.609456ms for postStartSetup
	I1126 20:45:41.879230  200894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-264537
	I1126 20:45:41.895472  200894 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/config.json ...
	I1126 20:45:41.895749  200894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:45:41.895802  200894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:45:41.912622  200894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/old-k8s-version-264537/id_rsa Username:docker}
	I1126 20:45:42.022661  200894 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:45:42.029621  200894 start.go:128] duration metric: took 10.925807454s to createHost
	I1126 20:45:42.029653  200894 start.go:83] releasing machines lock for "old-k8s-version-264537", held for 10.925960122s
	I1126 20:45:42.029733  200894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-264537
	I1126 20:45:42.048995  200894 ssh_runner.go:195] Run: cat /version.json
	I1126 20:45:42.049029  200894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:45:42.049051  200894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:45:42.049087  200894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:45:42.069810  200894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/old-k8s-version-264537/id_rsa Username:docker}
	I1126 20:45:42.079580  200894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/old-k8s-version-264537/id_rsa Username:docker}
	I1126 20:45:42.316057  200894 ssh_runner.go:195] Run: systemctl --version
	I1126 20:45:42.324240  200894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:45:42.367270  200894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:45:42.372780  200894 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:45:42.372877  200894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:45:42.405173  200894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1126 20:45:42.405209  200894 start.go:496] detecting cgroup driver to use...
	I1126 20:45:42.405241  200894 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:45:42.405297  200894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:45:42.423144  200894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:45:42.436159  200894 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:45:42.436228  200894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:45:42.454666  200894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:45:42.473382  200894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:45:42.586349  200894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:45:42.714044  200894 docker.go:234] disabling docker service ...
	I1126 20:45:42.714115  200894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:45:42.738295  200894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:45:42.753017  200894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:45:42.867967  200894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:45:42.989313  200894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:45:43.008740  200894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:45:43.025630  200894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1126 20:45:43.025733  200894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:45:43.035238  200894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:45:43.035362  200894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:45:43.044779  200894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:45:43.053829  200894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:45:43.062833  200894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:45:43.071109  200894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:45:43.080082  200894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:45:43.093795  200894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:45:43.103132  200894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:45:43.110983  200894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:45:43.118376  200894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:45:43.233442  200894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:45:43.413280  200894 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:45:43.413352  200894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:45:43.417262  200894 start.go:564] Will wait 60s for crictl version
	I1126 20:45:43.417366  200894 ssh_runner.go:195] Run: which crictl
	I1126 20:45:43.420719  200894 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:45:43.444365  200894 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:45:43.444448  200894 ssh_runner.go:195] Run: crio --version
	I1126 20:45:43.475450  200894 ssh_runner.go:195] Run: crio --version
	I1126 20:45:43.507562  200894 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1126 20:45:43.510433  200894 cli_runner.go:164] Run: docker network inspect old-k8s-version-264537 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:45:43.527282  200894 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1126 20:45:43.531054  200894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:45:43.541292  200894 kubeadm.go:884] updating cluster {Name:old-k8s-version-264537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264537 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:45:43.541422  200894 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 20:45:43.541478  200894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:45:43.577467  200894 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:45:43.577494  200894 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:45:43.577553  200894 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:45:43.605705  200894 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:45:43.605729  200894 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:45:43.605736  200894 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1126 20:45:43.605853  200894 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-264537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:45:43.605962  200894 ssh_runner.go:195] Run: crio config
	I1126 20:45:43.677571  200894 cni.go:84] Creating CNI manager for ""
	I1126 20:45:43.677640  200894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:45:43.677672  200894 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:45:43.677719  200894 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-264537 NodeName:old-k8s-version-264537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:45:43.677943  200894 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-264537"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:45:43.678053  200894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1126 20:45:43.686038  200894 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:45:43.686113  200894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:45:43.693674  200894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1126 20:45:43.706621  200894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:45:43.719802  200894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1126 20:45:43.733863  200894 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:45:43.737506  200894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:45:43.747156  200894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:45:43.872770  200894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:45:43.888559  200894 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537 for IP: 192.168.76.2
	I1126 20:45:43.888635  200894 certs.go:195] generating shared ca certs ...
	I1126 20:45:43.888665  200894 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:45:43.888897  200894 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:45:43.888986  200894 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:45:43.889021  200894 certs.go:257] generating profile certs ...
	I1126 20:45:43.889101  200894 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.key
	I1126 20:45:43.889139  200894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt with IP's: []
	I1126 20:45:44.523848  200894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt ...
	I1126 20:45:44.523892  200894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: {Name:mk97bf32bc6af5d85fe7ca3666de67545aa1f780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:45:44.524134  200894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.key ...
	I1126 20:45:44.524151  200894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.key: {Name:mk920892a55f10473114abf1812534bfd6b06209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:45:44.524250  200894 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.key.a12d1c1e
	I1126 20:45:44.524270  200894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.crt.a12d1c1e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1126 20:45:44.942325  200894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.crt.a12d1c1e ...
	I1126 20:45:44.942359  200894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.crt.a12d1c1e: {Name:mk0230414365c6885b81dba0d90187710de0b4f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:45:44.942544  200894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.key.a12d1c1e ...
	I1126 20:45:44.942560  200894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.key.a12d1c1e: {Name:mk3bbc21fb34c0d23045f872290951dae67be74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:45:44.942640  200894 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.crt.a12d1c1e -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.crt
	I1126 20:45:44.942719  200894 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.key.a12d1c1e -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.key
	I1126 20:45:44.942778  200894 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/proxy-client.key
	I1126 20:45:44.942796  200894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/proxy-client.crt with IP's: []
	I1126 20:45:44.988755  200894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/proxy-client.crt ...
	I1126 20:45:44.988783  200894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/proxy-client.crt: {Name:mkd818b95b945a6a16f46824f364143324a590bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:45:44.988956  200894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/proxy-client.key ...
	I1126 20:45:44.988971  200894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/proxy-client.key: {Name:mk2f649822a400badcccc59fcca0c1a4f0b4016a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:45:44.989151  200894 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:45:44.989198  200894 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:45:44.989206  200894 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:45:44.989232  200894 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:45:44.989262  200894 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:45:44.989292  200894 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:45:44.989342  200894 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:45:44.989986  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:45:45.009370  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:45:45.058886  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:45:45.104321  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:45:45.138712  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1126 20:45:45.169857  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:45:45.224428  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:45:45.268472  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:45:45.297483  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:45:45.322045  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:45:45.343960  200894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:45:45.366134  200894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:45:45.383540  200894 ssh_runner.go:195] Run: openssl version
	I1126 20:45:45.391803  200894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:45:45.401552  200894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:45:45.405143  200894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:45:45.405205  200894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:45:45.446089  200894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:45:45.456295  200894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:45:45.465380  200894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:45:45.469807  200894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:45:45.469885  200894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:45:45.512490  200894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:45:45.520994  200894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:45:45.529595  200894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:45:45.535175  200894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:45:45.535273  200894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:45:45.576377  200894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:45:45.584796  200894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:45:45.588239  200894 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:45:45.588303  200894 kubeadm.go:401] StartCluster: {Name:old-k8s-version-264537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-264537 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:45:45.588380  200894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:45:45.588443  200894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:45:45.617750  200894 cri.go:89] found id: ""
	I1126 20:45:45.617842  200894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:45:45.625629  200894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:45:45.633622  200894 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:45:45.633754  200894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:45:45.641875  200894 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:45:45.641894  200894 kubeadm.go:158] found existing configuration files:
	
	I1126 20:45:45.642039  200894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:45:45.649639  200894 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:45:45.649719  200894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:45:45.657062  200894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:45:45.664998  200894 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:45:45.665120  200894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:45:45.672314  200894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:45:45.679666  200894 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:45:45.679733  200894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:45:45.686845  200894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:45:45.694521  200894 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:45:45.694588  200894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:45:45.702178  200894 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:45:45.751640  200894 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1126 20:45:45.751812  200894 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:45:45.790024  200894 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:45:45.790101  200894 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1126 20:45:45.790184  200894 kubeadm.go:319] OS: Linux
	I1126 20:45:45.790271  200894 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:45:45.790378  200894 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1126 20:45:45.790449  200894 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:45:45.790525  200894 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:45:45.790602  200894 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:45:45.790684  200894 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:45:45.790763  200894 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:45:45.790835  200894 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:45:45.790904  200894 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1126 20:45:45.872137  200894 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:45:45.872267  200894 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:45:45.872381  200894 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1126 20:45:46.022343  200894 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:45:46.025414  200894 out.go:252]   - Generating certificates and keys ...
	I1126 20:45:46.025521  200894 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:45:46.025615  200894 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:45:46.168781  200894 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:45:46.348854  200894 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:45:47.264385  200894 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:45:47.856939  200894 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:45:48.707233  200894 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:45:48.707565  200894 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-264537] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:45:48.833314  200894 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:45:48.833635  200894 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-264537] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:45:49.097845  200894 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:45:49.526916  200894 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:45:50.049485  200894 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:45:50.049903  200894 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:45:51.456051  200894 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:45:51.969868  200894 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:45:52.574403  200894 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:45:53.717306  200894 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:45:53.718163  200894 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:45:53.720785  200894 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 20:45:53.724310  200894 out.go:252]   - Booting up control plane ...
	I1126 20:45:53.724427  200894 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:45:53.724511  200894 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:45:53.725298  200894 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:45:53.746928  200894 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:45:53.748228  200894 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:45:53.748283  200894 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:45:53.886060  200894 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1126 20:46:01.393848  200894 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.511654 seconds
	I1126 20:46:01.394005  200894 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:46:01.414785  200894 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:46:01.951467  200894 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:46:01.951682  200894 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-264537 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:46:02.465624  200894 kubeadm.go:319] [bootstrap-token] Using token: loczve.u74aqwizy4j8vglf
	I1126 20:46:02.468529  200894 out.go:252]   - Configuring RBAC rules ...
	I1126 20:46:02.468648  200894 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:46:02.474644  200894 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:46:02.483870  200894 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:46:02.488336  200894 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:46:02.495125  200894 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:46:02.499568  200894 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:46:02.517606  200894 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:46:02.844304  200894 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:46:02.905617  200894 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:46:02.906925  200894 kubeadm.go:319] 
	I1126 20:46:02.907006  200894 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:46:02.907017  200894 kubeadm.go:319] 
	I1126 20:46:02.907091  200894 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:46:02.907099  200894 kubeadm.go:319] 
	I1126 20:46:02.907123  200894 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:46:02.907181  200894 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:46:02.907232  200894 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:46:02.907238  200894 kubeadm.go:319] 
	I1126 20:46:02.907289  200894 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:46:02.907295  200894 kubeadm.go:319] 
	I1126 20:46:02.907339  200894 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:46:02.907347  200894 kubeadm.go:319] 
	I1126 20:46:02.907396  200894 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:46:02.907472  200894 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:46:02.907543  200894 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:46:02.907560  200894 kubeadm.go:319] 
	I1126 20:46:02.907642  200894 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:46:02.907718  200894 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:46:02.907727  200894 kubeadm.go:319] 
	I1126 20:46:02.907806  200894 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token loczve.u74aqwizy4j8vglf \
	I1126 20:46:02.907907  200894 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a \
	I1126 20:46:02.907931  200894 kubeadm.go:319] 	--control-plane 
	I1126 20:46:02.907939  200894 kubeadm.go:319] 
	I1126 20:46:02.908019  200894 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:46:02.908029  200894 kubeadm.go:319] 
	I1126 20:46:02.908106  200894 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token loczve.u74aqwizy4j8vglf \
	I1126 20:46:02.908206  200894 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a 
	I1126 20:46:02.912028  200894 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1126 20:46:02.912165  200894 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 20:46:02.912242  200894 cni.go:84] Creating CNI manager for ""
	I1126 20:46:02.912253  200894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:46:02.917429  200894 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1126 20:46:02.920410  200894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:46:02.930397  200894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1126 20:46:02.930429  200894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:46:02.966920  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:46:03.951405  200894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:46:03.951531  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:03.951610  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-264537 minikube.k8s.io/updated_at=2025_11_26T20_46_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=old-k8s-version-264537 minikube.k8s.io/primary=true
	I1126 20:46:04.203988  200894 ops.go:34] apiserver oom_adj: -16
	I1126 20:46:04.204094  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:04.705132  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:05.204201  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:05.704690  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:06.205130  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:06.704239  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:07.204203  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:07.707635  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:08.205132  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:08.704851  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:09.204402  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:09.704689  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:10.205177  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:10.705163  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:11.205108  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:11.705032  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:12.204293  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:12.704468  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:13.204665  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:13.704753  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:14.205090  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:14.704450  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:15.204229  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:15.704770  200894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:46:15.806341  200894 kubeadm.go:1114] duration metric: took 11.854853162s to wait for elevateKubeSystemPrivileges
	I1126 20:46:15.806367  200894 kubeadm.go:403] duration metric: took 30.218069439s to StartCluster
	I1126 20:46:15.806384  200894 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:46:15.806452  200894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:46:15.807439  200894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:46:15.807653  200894 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:46:15.807776  200894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:46:15.808012  200894 config.go:182] Loaded profile config "old-k8s-version-264537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:46:15.808058  200894 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:46:15.808114  200894 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-264537"
	I1126 20:46:15.808135  200894 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-264537"
	I1126 20:46:15.808163  200894 host.go:66] Checking if "old-k8s-version-264537" exists ...
	I1126 20:46:15.808481  200894 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-264537"
	I1126 20:46:15.808502  200894 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-264537"
	I1126 20:46:15.808787  200894 cli_runner.go:164] Run: docker container inspect old-k8s-version-264537 --format={{.State.Status}}
	I1126 20:46:15.809103  200894 cli_runner.go:164] Run: docker container inspect old-k8s-version-264537 --format={{.State.Status}}
	I1126 20:46:15.812080  200894 out.go:179] * Verifying Kubernetes components...
	I1126 20:46:15.815473  200894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:46:15.841150  200894 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-264537"
	I1126 20:46:15.841187  200894 host.go:66] Checking if "old-k8s-version-264537" exists ...
	I1126 20:46:15.841601  200894 cli_runner.go:164] Run: docker container inspect old-k8s-version-264537 --format={{.State.Status}}
	I1126 20:46:15.861152  200894 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:46:15.863982  200894 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:46:15.864011  200894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:46:15.864079  200894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:46:15.883312  200894 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:46:15.883339  200894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:46:15.883405  200894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:46:15.916564  200894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/old-k8s-version-264537/id_rsa Username:docker}
	I1126 20:46:15.931876  200894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/old-k8s-version-264537/id_rsa Username:docker}
	I1126 20:46:16.163897  200894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:46:16.164025  200894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:46:16.286992  200894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:46:16.292143  200894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:46:17.176048  200894 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.011979032s)
	I1126 20:46:17.177036  200894 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-264537" to be "Ready" ...
	I1126 20:46:17.177407  200894 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.013481101s)
	I1126 20:46:17.177466  200894 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1126 20:46:17.478956  200894 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18671517s)
	I1126 20:46:17.484073  200894 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1126 20:46:17.487103  200894 addons.go:530] duration metric: took 1.679021585s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1126 20:46:17.683716  200894 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-264537" context rescaled to 1 replicas
	W1126 20:46:19.180118  200894 node_ready.go:57] node "old-k8s-version-264537" has "Ready":"False" status (will retry)
	W1126 20:46:21.180383  200894 node_ready.go:57] node "old-k8s-version-264537" has "Ready":"False" status (will retry)
	W1126 20:46:23.680876  200894 node_ready.go:57] node "old-k8s-version-264537" has "Ready":"False" status (will retry)
	W1126 20:46:26.180976  200894 node_ready.go:57] node "old-k8s-version-264537" has "Ready":"False" status (will retry)
	W1126 20:46:28.680508  200894 node_ready.go:57] node "old-k8s-version-264537" has "Ready":"False" status (will retry)
	I1126 20:46:29.680061  200894 node_ready.go:49] node "old-k8s-version-264537" is "Ready"
	I1126 20:46:29.680090  200894 node_ready.go:38] duration metric: took 12.503006992s for node "old-k8s-version-264537" to be "Ready" ...
	I1126 20:46:29.680105  200894 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:46:29.680164  200894 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:46:29.692509  200894 api_server.go:72] duration metric: took 13.884817516s to wait for apiserver process to appear ...
	I1126 20:46:29.692533  200894 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:46:29.692552  200894 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:46:29.701822  200894 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:46:29.703279  200894 api_server.go:141] control plane version: v1.28.0
	I1126 20:46:29.703306  200894 api_server.go:131] duration metric: took 10.765919ms to wait for apiserver health ...
	I1126 20:46:29.703316  200894 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:46:29.707166  200894 system_pods.go:59] 8 kube-system pods found
	I1126 20:46:29.707203  200894 system_pods.go:61] "coredns-5dd5756b68-w99t5" [3478d6d7-c19e-4d95-a1bb-250fd6b7231a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:46:29.707210  200894 system_pods.go:61] "etcd-old-k8s-version-264537" [5c71d4e2-c122-46ec-a14c-1645b84c90ee] Running
	I1126 20:46:29.707216  200894 system_pods.go:61] "kindnet-6k58p" [99b68428-ee78-42b4-a4a2-ff90303a8675] Running
	I1126 20:46:29.707220  200894 system_pods.go:61] "kube-apiserver-old-k8s-version-264537" [bf8326df-ce2a-407d-b7b2-6baf07e4cbac] Running
	I1126 20:46:29.707225  200894 system_pods.go:61] "kube-controller-manager-old-k8s-version-264537" [22a92a53-937e-478d-ad05-03d33d2fff3d] Running
	I1126 20:46:29.707229  200894 system_pods.go:61] "kube-proxy-9rv9c" [07845951-73d4-47b6-bee6-ea94e0ee8f8b] Running
	I1126 20:46:29.707233  200894 system_pods.go:61] "kube-scheduler-old-k8s-version-264537" [0660ef3e-bbc3-43b3-a692-d011ec953162] Running
	I1126 20:46:29.707241  200894 system_pods.go:61] "storage-provisioner" [a225364a-610f-4a1a-8675-a654eebbd3cc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:46:29.707257  200894 system_pods.go:74] duration metric: took 3.923772ms to wait for pod list to return data ...
	I1126 20:46:29.707270  200894 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:46:29.709651  200894 default_sa.go:45] found service account: "default"
	I1126 20:46:29.709675  200894 default_sa.go:55] duration metric: took 2.39872ms for default service account to be created ...
	I1126 20:46:29.709685  200894 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:46:29.713458  200894 system_pods.go:86] 8 kube-system pods found
	I1126 20:46:29.713490  200894 system_pods.go:89] "coredns-5dd5756b68-w99t5" [3478d6d7-c19e-4d95-a1bb-250fd6b7231a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:46:29.713506  200894 system_pods.go:89] "etcd-old-k8s-version-264537" [5c71d4e2-c122-46ec-a14c-1645b84c90ee] Running
	I1126 20:46:29.713520  200894 system_pods.go:89] "kindnet-6k58p" [99b68428-ee78-42b4-a4a2-ff90303a8675] Running
	I1126 20:46:29.713525  200894 system_pods.go:89] "kube-apiserver-old-k8s-version-264537" [bf8326df-ce2a-407d-b7b2-6baf07e4cbac] Running
	I1126 20:46:29.713535  200894 system_pods.go:89] "kube-controller-manager-old-k8s-version-264537" [22a92a53-937e-478d-ad05-03d33d2fff3d] Running
	I1126 20:46:29.713539  200894 system_pods.go:89] "kube-proxy-9rv9c" [07845951-73d4-47b6-bee6-ea94e0ee8f8b] Running
	I1126 20:46:29.713543  200894 system_pods.go:89] "kube-scheduler-old-k8s-version-264537" [0660ef3e-bbc3-43b3-a692-d011ec953162] Running
	I1126 20:46:29.713557  200894 system_pods.go:89] "storage-provisioner" [a225364a-610f-4a1a-8675-a654eebbd3cc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:46:29.713587  200894 retry.go:31] will retry after 305.673546ms: missing components: kube-dns
	I1126 20:46:30.028413  200894 system_pods.go:86] 8 kube-system pods found
	I1126 20:46:30.028463  200894 system_pods.go:89] "coredns-5dd5756b68-w99t5" [3478d6d7-c19e-4d95-a1bb-250fd6b7231a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:46:30.028473  200894 system_pods.go:89] "etcd-old-k8s-version-264537" [5c71d4e2-c122-46ec-a14c-1645b84c90ee] Running
	I1126 20:46:30.028481  200894 system_pods.go:89] "kindnet-6k58p" [99b68428-ee78-42b4-a4a2-ff90303a8675] Running
	I1126 20:46:30.028486  200894 system_pods.go:89] "kube-apiserver-old-k8s-version-264537" [bf8326df-ce2a-407d-b7b2-6baf07e4cbac] Running
	I1126 20:46:30.028491  200894 system_pods.go:89] "kube-controller-manager-old-k8s-version-264537" [22a92a53-937e-478d-ad05-03d33d2fff3d] Running
	I1126 20:46:30.028495  200894 system_pods.go:89] "kube-proxy-9rv9c" [07845951-73d4-47b6-bee6-ea94e0ee8f8b] Running
	I1126 20:46:30.028499  200894 system_pods.go:89] "kube-scheduler-old-k8s-version-264537" [0660ef3e-bbc3-43b3-a692-d011ec953162] Running
	I1126 20:46:30.028505  200894 system_pods.go:89] "storage-provisioner" [a225364a-610f-4a1a-8675-a654eebbd3cc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:46:30.028522  200894 retry.go:31] will retry after 299.884269ms: missing components: kube-dns
	I1126 20:46:30.333209  200894 system_pods.go:86] 8 kube-system pods found
	I1126 20:46:30.333239  200894 system_pods.go:89] "coredns-5dd5756b68-w99t5" [3478d6d7-c19e-4d95-a1bb-250fd6b7231a] Running
	I1126 20:46:30.333249  200894 system_pods.go:89] "etcd-old-k8s-version-264537" [5c71d4e2-c122-46ec-a14c-1645b84c90ee] Running
	I1126 20:46:30.333253  200894 system_pods.go:89] "kindnet-6k58p" [99b68428-ee78-42b4-a4a2-ff90303a8675] Running
	I1126 20:46:30.333306  200894 system_pods.go:89] "kube-apiserver-old-k8s-version-264537" [bf8326df-ce2a-407d-b7b2-6baf07e4cbac] Running
	I1126 20:46:30.333318  200894 system_pods.go:89] "kube-controller-manager-old-k8s-version-264537" [22a92a53-937e-478d-ad05-03d33d2fff3d] Running
	I1126 20:46:30.333323  200894 system_pods.go:89] "kube-proxy-9rv9c" [07845951-73d4-47b6-bee6-ea94e0ee8f8b] Running
	I1126 20:46:30.333327  200894 system_pods.go:89] "kube-scheduler-old-k8s-version-264537" [0660ef3e-bbc3-43b3-a692-d011ec953162] Running
	I1126 20:46:30.333331  200894 system_pods.go:89] "storage-provisioner" [a225364a-610f-4a1a-8675-a654eebbd3cc] Running
	I1126 20:46:30.333345  200894 system_pods.go:126] duration metric: took 623.654869ms to wait for k8s-apps to be running ...
	I1126 20:46:30.333367  200894 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:46:30.333430  200894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:46:30.351906  200894 system_svc.go:56] duration metric: took 18.529995ms WaitForService to wait for kubelet
	I1126 20:46:30.351936  200894 kubeadm.go:587] duration metric: took 14.544249835s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:46:30.351953  200894 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:46:30.356893  200894 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:46:30.356922  200894 node_conditions.go:123] node cpu capacity is 2
	I1126 20:46:30.356934  200894 node_conditions.go:105] duration metric: took 4.975986ms to run NodePressure ...
	I1126 20:46:30.356946  200894 start.go:242] waiting for startup goroutines ...
	I1126 20:46:30.356954  200894 start.go:247] waiting for cluster config update ...
	I1126 20:46:30.356968  200894 start.go:256] writing updated cluster config ...
	I1126 20:46:30.357241  200894 ssh_runner.go:195] Run: rm -f paused
	I1126 20:46:30.364609  200894 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:46:30.369424  200894 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-w99t5" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:30.374641  200894 pod_ready.go:94] pod "coredns-5dd5756b68-w99t5" is "Ready"
	I1126 20:46:30.374668  200894 pod_ready.go:86] duration metric: took 5.220483ms for pod "coredns-5dd5756b68-w99t5" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:30.377429  200894 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-264537" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:30.383196  200894 pod_ready.go:94] pod "etcd-old-k8s-version-264537" is "Ready"
	I1126 20:46:30.383223  200894 pod_ready.go:86] duration metric: took 5.770906ms for pod "etcd-old-k8s-version-264537" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:30.386339  200894 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-264537" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:30.391410  200894 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-264537" is "Ready"
	I1126 20:46:30.391439  200894 pod_ready.go:86] duration metric: took 5.070703ms for pod "kube-apiserver-old-k8s-version-264537" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:30.394630  200894 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-264537" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:30.769019  200894 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-264537" is "Ready"
	I1126 20:46:30.769045  200894 pod_ready.go:86] duration metric: took 374.389676ms for pod "kube-controller-manager-old-k8s-version-264537" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:30.970180  200894 pod_ready.go:83] waiting for pod "kube-proxy-9rv9c" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:31.368670  200894 pod_ready.go:94] pod "kube-proxy-9rv9c" is "Ready"
	I1126 20:46:31.368698  200894 pod_ready.go:86] duration metric: took 398.492338ms for pod "kube-proxy-9rv9c" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:31.571918  200894 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-264537" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:31.969120  200894 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-264537" is "Ready"
	I1126 20:46:31.969158  200894 pod_ready.go:86] duration metric: took 397.206335ms for pod "kube-scheduler-old-k8s-version-264537" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:46:31.969198  200894 pod_ready.go:40] duration metric: took 1.604541956s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:46:32.024699  200894 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1126 20:46:32.027634  200894 out.go:203] 
	W1126 20:46:32.030671  200894 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1126 20:46:32.033588  200894 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1126 20:46:32.036478  200894 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-264537" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 20:46:29 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:29.869664826Z" level=info msg="Created container e3f0f493a71907f5540fae8d672fd1c915f18dd2a3b4ec1857c8b1a6cf5eed85: kube-system/coredns-5dd5756b68-w99t5/coredns" id=b8472d70-b4e9-4601-bd7f-f7c6cb1011b0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:46:29 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:29.870849959Z" level=info msg="Starting container: e3f0f493a71907f5540fae8d672fd1c915f18dd2a3b4ec1857c8b1a6cf5eed85" id=1f59e6b0-0290-49fe-b7bf-547ad35db6e8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:46:29 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:29.875318779Z" level=info msg="Started container" PID=1950 containerID=e3f0f493a71907f5540fae8d672fd1c915f18dd2a3b4ec1857c8b1a6cf5eed85 description=kube-system/coredns-5dd5756b68-w99t5/coredns id=1f59e6b0-0290-49fe-b7bf-547ad35db6e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=05302bfc787896037862d1926e65bcbf5985e1363ba6045f2bfd8bd4d8f51741
	Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.552178013Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7074cd15-e98f-4148-b0a2-5f0d093fbda7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.552244653Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.557213739Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9a64571ea8449bce67e68f929c1e879ae5f6496fb57009e8d98931c3a8f308bc UID:08c368e4-7be3-4bc3-bde6-222d7bd7f0c1 NetNS:/var/run/netns/02026bf7-1de8-42bb-9933-83d96160b6ed Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012ddc8}] Aliases:map[]}"
	Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.557405249Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.575083385Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9a64571ea8449bce67e68f929c1e879ae5f6496fb57009e8d98931c3a8f308bc UID:08c368e4-7be3-4bc3-bde6-222d7bd7f0c1 NetNS:/var/run/netns/02026bf7-1de8-42bb-9933-83d96160b6ed Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012ddc8}] Aliases:map[]}"
	Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.575408633Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.580522084Z" level=info msg="Ran pod sandbox 9a64571ea8449bce67e68f929c1e879ae5f6496fb57009e8d98931c3a8f308bc with infra container: default/busybox/POD" id=7074cd15-e98f-4148-b0a2-5f0d093fbda7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.581666923Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cc11c624-79cc-4a2c-9d54-c06473d8c835 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.581915243Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cc11c624-79cc-4a2c-9d54-c06473d8c835 name=/runtime.v1.ImageService/ImageStatus
Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.58212413Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cc11c624-79cc-4a2c-9d54-c06473d8c835 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.582834367Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a76be6f9-dc92-4ef7-8c2d-c6671c6956bc name=/runtime.v1.ImageService/PullImage
	Nov 26 20:46:32 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:32.585199382Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 26 20:46:34 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:34.616604819Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a76be6f9-dc92-4ef7-8c2d-c6671c6956bc name=/runtime.v1.ImageService/PullImage
	Nov 26 20:46:34 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:34.620175997Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2ba533bf-6d23-41a4-bf21-195885a95386 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:46:34 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:34.623015712Z" level=info msg="Creating container: default/busybox/busybox" id=ed0a6fab-a054-47a5-86a9-9c97bc5e08f9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:46:34 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:34.623136856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:46:34 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:34.627878382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:46:34 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:34.62853833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:46:34 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:34.646862205Z" level=info msg="Created container cf0c6b0cc18260d82e058c60ddb10421c97ed8b3ffb8fa02bb9a8242dc3c40e7: default/busybox/busybox" id=ed0a6fab-a054-47a5-86a9-9c97bc5e08f9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:46:34 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:34.647535659Z" level=info msg="Starting container: cf0c6b0cc18260d82e058c60ddb10421c97ed8b3ffb8fa02bb9a8242dc3c40e7" id=0c27f526-07ef-4507-9972-583b7f741e85 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:46:34 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:34.649978489Z" level=info msg="Started container" PID=2009 containerID=cf0c6b0cc18260d82e058c60ddb10421c97ed8b3ffb8fa02bb9a8242dc3c40e7 description=default/busybox/busybox id=0c27f526-07ef-4507-9972-583b7f741e85 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a64571ea8449bce67e68f929c1e879ae5f6496fb57009e8d98931c3a8f308bc
	Nov 26 20:46:41 old-k8s-version-264537 crio[838]: time="2025-11-26T20:46:41.421486096Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	cf0c6b0cc1826       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   9a64571ea8449       busybox                                          default
	e3f0f493a7190       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   05302bfc78789       coredns-5dd5756b68-w99t5                         kube-system
	4ed52bc56548d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   dbf0d8a76f961       storage-provisioner                              kube-system
	40a2fa188e7ee       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   8507c1d49f030       kindnet-6k58p                                    kube-system
	32e18d22d959b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   d6978b8ecbb27       kube-proxy-9rv9c                                 kube-system
	ab2893a12f28e       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   62b26ed93c969       kube-scheduler-old-k8s-version-264537            kube-system
	ba1d7ae09cbc7       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   6f159ed0d3771       kube-apiserver-old-k8s-version-264537            kube-system
	6477445f609d0       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      47 seconds ago      Running             kube-controller-manager   0                   2cb7e85c59bbc       kube-controller-manager-old-k8s-version-264537   kube-system
	b8502f679a616       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   6686a7302c788       etcd-old-k8s-version-264537                      kube-system
	
	
	==> coredns [e3f0f493a71907f5540fae8d672fd1c915f18dd2a3b4ec1857c8b1a6cf5eed85] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49462 - 718 "HINFO IN 5885312651904793957.4454971327785749273. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014367946s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-264537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-264537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=old-k8s-version-264537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_46_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:45:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-264537
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:46:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:46:33 +0000   Wed, 26 Nov 2025 20:45:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:46:33 +0000   Wed, 26 Nov 2025 20:45:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:46:33 +0000   Wed, 26 Nov 2025 20:45:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:46:33 +0000   Wed, 26 Nov 2025 20:46:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-264537
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                8b1866d5-0ca9-4303-8791-a0bc9b937ae1
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-w99t5                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-264537                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-6k58p                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-264537             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-264537    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-9rv9c                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-264537             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-264537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-264537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-264537 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node old-k8s-version-264537 event: Registered Node old-k8s-version-264537 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-264537 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov26 20:14] overlayfs: idmapped layers are currently not supported
	[Nov26 20:16] overlayfs: idmapped layers are currently not supported
	[Nov26 20:21] overlayfs: idmapped layers are currently not supported
	[ +33.563196] overlayfs: idmapped layers are currently not supported
	[Nov26 20:23] overlayfs: idmapped layers are currently not supported
	[Nov26 20:24] overlayfs: idmapped layers are currently not supported
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b8502f679a6165afa6694a02497a91a41899ba1782d08d9b3b228c83f9906f1d] <==
	{"level":"info","ts":"2025-11-26T20:45:55.731121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-26T20:45:55.739736Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-26T20:45:55.728778Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:45:55.739814Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:45:55.740009Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-26T20:45:55.740052Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-26T20:45:55.740063Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-26T20:45:56.289964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-26T20:45:56.290083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-26T20:45:56.290125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-26T20:45:56.290174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-26T20:45:56.290205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-26T20:45:56.290241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-26T20:45:56.290273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-26T20:45:56.293224Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:45:56.296051Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-264537 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-26T20:45:56.296307Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:45:56.296823Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:45:56.296952Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:45:56.297969Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:45:56.298704Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-26T20:45:56.298775Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-26T20:45:56.307084Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-26T20:45:56.298786Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:45:56.308655Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:46:43 up  1:28,  0 user,  load average: 2.12, 2.69, 2.20
	Linux old-k8s-version-264537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [40a2fa188e7ee74d3b1c4d9ceac4ea6b70e2da97743da432cb4a80ff42f5ce9e] <==
	I1126 20:46:19.139617       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:46:19.139859       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:46:19.139971       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:46:19.139989       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:46:19.140002       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:46:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:46:19.333389       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:46:19.428652       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:46:19.428747       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:46:19.428908       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:46:19.629014       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:46:19.629041       1 metrics.go:72] Registering metrics
	I1126 20:46:19.629112       1 controller.go:711] "Syncing nftables rules"
	I1126 20:46:29.338341       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:46:29.338402       1 main.go:301] handling current node
	I1126 20:46:39.336211       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:46:39.336251       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ba1d7ae09cbc78cb34cc90babf0ad2d9199fcb5ff806373a4b307d175dbc1b5d] <==
	I1126 20:45:59.784665       1 autoregister_controller.go:141] Starting autoregister controller
	I1126 20:45:59.784673       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:45:59.784680       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:45:59.785880       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1126 20:45:59.785914       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1126 20:45:59.786143       1 controller.go:624] quota admission added evaluator for: namespaces
	I1126 20:45:59.787084       1 shared_informer.go:318] Caches are synced for configmaps
	I1126 20:45:59.787142       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:45:59.791879       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1126 20:45:59.853776       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:46:00.596534       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:46:00.604096       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:46:00.604807       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:46:01.259348       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:46:01.329426       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:46:01.427817       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:46:01.439982       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1126 20:46:01.442217       1 controller.go:624] quota admission added evaluator for: endpoints
	I1126 20:46:01.448034       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:46:01.677491       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1126 20:46:02.811892       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1126 20:46:02.840148       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:46:02.857863       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1126 20:46:14.677201       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1126 20:46:15.526854       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [6477445f609d00e1fa3cb7523daf58f81a619dd0e807eb7e7518cdc67cd2134c] <==
	I1126 20:46:14.781662       1 shared_informer.go:318] Caches are synced for resource quota
	I1126 20:46:14.798661       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1126 20:46:14.809540       1 shared_informer.go:318] Caches are synced for resource quota
	I1126 20:46:15.218309       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:46:15.218414       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1126 20:46:15.244116       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:46:15.540412       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6k58p"
	I1126 20:46:15.551214       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9rv9c"
	I1126 20:46:15.602274       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-str96"
	I1126 20:46:15.620125       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-w99t5"
	I1126 20:46:15.634135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="945.877876ms"
	I1126 20:46:15.651674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.33196ms"
	I1126 20:46:15.652047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="167.641µs"
	I1126 20:46:15.678955       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1126 20:46:17.249499       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1126 20:46:17.302370       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-str96"
	I1126 20:46:17.317567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.856809ms"
	I1126 20:46:17.345511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.898688ms"
	I1126 20:46:17.345627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.59µs"
	I1126 20:46:29.497757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="219.455µs"
	I1126 20:46:29.522073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="192.265µs"
	I1126 20:46:29.692012       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1126 20:46:30.098887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.455µs"
	I1126 20:46:30.164197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.792543ms"
	I1126 20:46:30.165643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="167.346µs"
	
	
	==> kube-proxy [32e18d22d959b944c71da1af990d7372612ae7e004d5f7058077d955604f1984] <==
	I1126 20:46:16.177005       1 server_others.go:69] "Using iptables proxy"
	I1126 20:46:16.205762       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1126 20:46:16.239382       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:46:16.241292       1 server_others.go:152] "Using iptables Proxier"
	I1126 20:46:16.241374       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1126 20:46:16.241407       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1126 20:46:16.241473       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1126 20:46:16.241714       1 server.go:846] "Version info" version="v1.28.0"
	I1126 20:46:16.241972       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:46:16.242778       1 config.go:188] "Starting service config controller"
	I1126 20:46:16.242860       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1126 20:46:16.242905       1 config.go:97] "Starting endpoint slice config controller"
	I1126 20:46:16.242932       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1126 20:46:16.244773       1 config.go:315] "Starting node config controller"
	I1126 20:46:16.246077       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1126 20:46:16.343263       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1126 20:46:16.343322       1 shared_informer.go:318] Caches are synced for service config
	I1126 20:46:16.346170       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ab2893a12f28ef852556013bbeffccfa391065776e6cd85886724621762d9db1] <==
	W1126 20:46:00.381840       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1126 20:46:00.381894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1126 20:46:00.382056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1126 20:46:00.382106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1126 20:46:00.382262       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1126 20:46:00.382307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1126 20:46:00.382813       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1126 20:46:00.382851       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1126 20:46:00.383193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1126 20:46:00.383229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1126 20:46:00.383265       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1126 20:46:00.383282       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1126 20:46:00.383313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1126 20:46:00.383332       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1126 20:46:00.383406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1126 20:46:00.383459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1126 20:46:00.383629       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1126 20:46:00.383676       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1126 20:46:00.383747       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1126 20:46:00.383767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1126 20:46:00.383803       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1126 20:46:00.383842       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1126 20:46:00.383820       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1126 20:46:00.383915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1126 20:46:01.454186       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 26 20:46:15 old-k8s-version-264537 kubelet[1395]: I1126 20:46:15.562612    1395 topology_manager.go:215] "Topology Admit Handler" podUID="07845951-73d4-47b6-bee6-ea94e0ee8f8b" podNamespace="kube-system" podName="kube-proxy-9rv9c"
	Nov 26 20:46:15 old-k8s-version-264537 kubelet[1395]: I1126 20:46:15.615807    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99b68428-ee78-42b4-a4a2-ff90303a8675-lib-modules\") pod \"kindnet-6k58p\" (UID: \"99b68428-ee78-42b4-a4a2-ff90303a8675\") " pod="kube-system/kindnet-6k58p"
	Nov 26 20:46:15 old-k8s-version-264537 kubelet[1395]: I1126 20:46:15.615870    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h5d4\" (UniqueName: \"kubernetes.io/projected/99b68428-ee78-42b4-a4a2-ff90303a8675-kube-api-access-9h5d4\") pod \"kindnet-6k58p\" (UID: \"99b68428-ee78-42b4-a4a2-ff90303a8675\") " pod="kube-system/kindnet-6k58p"
	Nov 26 20:46:15 old-k8s-version-264537 kubelet[1395]: I1126 20:46:15.615914    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99b68428-ee78-42b4-a4a2-ff90303a8675-xtables-lock\") pod \"kindnet-6k58p\" (UID: \"99b68428-ee78-42b4-a4a2-ff90303a8675\") " pod="kube-system/kindnet-6k58p"
	Nov 26 20:46:15 old-k8s-version-264537 kubelet[1395]: I1126 20:46:15.615939    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07845951-73d4-47b6-bee6-ea94e0ee8f8b-lib-modules\") pod \"kube-proxy-9rv9c\" (UID: \"07845951-73d4-47b6-bee6-ea94e0ee8f8b\") " pod="kube-system/kube-proxy-9rv9c"
	Nov 26 20:46:15 old-k8s-version-264537 kubelet[1395]: I1126 20:46:15.615976    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07845951-73d4-47b6-bee6-ea94e0ee8f8b-xtables-lock\") pod \"kube-proxy-9rv9c\" (UID: \"07845951-73d4-47b6-bee6-ea94e0ee8f8b\") " pod="kube-system/kube-proxy-9rv9c"
	Nov 26 20:46:15 old-k8s-version-264537 kubelet[1395]: I1126 20:46:15.616002    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qd8r\" (UniqueName: \"kubernetes.io/projected/07845951-73d4-47b6-bee6-ea94e0ee8f8b-kube-api-access-7qd8r\") pod \"kube-proxy-9rv9c\" (UID: \"07845951-73d4-47b6-bee6-ea94e0ee8f8b\") " pod="kube-system/kube-proxy-9rv9c"
	Nov 26 20:46:15 old-k8s-version-264537 kubelet[1395]: I1126 20:46:15.616026    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/99b68428-ee78-42b4-a4a2-ff90303a8675-cni-cfg\") pod \"kindnet-6k58p\" (UID: \"99b68428-ee78-42b4-a4a2-ff90303a8675\") " pod="kube-system/kindnet-6k58p"
	Nov 26 20:46:15 old-k8s-version-264537 kubelet[1395]: I1126 20:46:15.616059    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/07845951-73d4-47b6-bee6-ea94e0ee8f8b-kube-proxy\") pod \"kube-proxy-9rv9c\" (UID: \"07845951-73d4-47b6-bee6-ea94e0ee8f8b\") " pod="kube-system/kube-proxy-9rv9c"
	Nov 26 20:46:15 old-k8s-version-264537 kubelet[1395]: W1126 20:46:15.970013    1395 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/crio-d6978b8ecbb2705a0de3416c25fb5859e2d875d3e4b6cb2e3b6696aae67f64e0 WatchSource:0}: Error finding container d6978b8ecbb2705a0de3416c25fb5859e2d875d3e4b6cb2e3b6696aae67f64e0: Status 404 returned error can't find the container with id d6978b8ecbb2705a0de3416c25fb5859e2d875d3e4b6cb2e3b6696aae67f64e0
	Nov 26 20:46:19 old-k8s-version-264537 kubelet[1395]: I1126 20:46:19.072238    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-6k58p" podStartSLOduration=1.020632042 podCreationTimestamp="2025-11-26 20:46:15 +0000 UTC" firstStartedPulling="2025-11-26 20:46:15.943602653 +0000 UTC m=+13.169063766" lastFinishedPulling="2025-11-26 20:46:18.995164171 +0000 UTC m=+16.220625292" observedRunningTime="2025-11-26 20:46:19.072151666 +0000 UTC m=+16.297612787" watchObservedRunningTime="2025-11-26 20:46:19.072193568 +0000 UTC m=+16.297654689"
	Nov 26 20:46:19 old-k8s-version-264537 kubelet[1395]: I1126 20:46:19.072365    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9rv9c" podStartSLOduration=4.072348427 podCreationTimestamp="2025-11-26 20:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:46:17.071206691 +0000 UTC m=+14.296667812" watchObservedRunningTime="2025-11-26 20:46:19.072348427 +0000 UTC m=+16.297809556"
	Nov 26 20:46:29 old-k8s-version-264537 kubelet[1395]: I1126 20:46:29.456477    1395 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 26 20:46:29 old-k8s-version-264537 kubelet[1395]: I1126 20:46:29.490141    1395 topology_manager.go:215] "Topology Admit Handler" podUID="a225364a-610f-4a1a-8675-a654eebbd3cc" podNamespace="kube-system" podName="storage-provisioner"
	Nov 26 20:46:29 old-k8s-version-264537 kubelet[1395]: I1126 20:46:29.494746    1395 topology_manager.go:215] "Topology Admit Handler" podUID="3478d6d7-c19e-4d95-a1bb-250fd6b7231a" podNamespace="kube-system" podName="coredns-5dd5756b68-w99t5"
	Nov 26 20:46:29 old-k8s-version-264537 kubelet[1395]: I1126 20:46:29.611703    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a225364a-610f-4a1a-8675-a654eebbd3cc-tmp\") pod \"storage-provisioner\" (UID: \"a225364a-610f-4a1a-8675-a654eebbd3cc\") " pod="kube-system/storage-provisioner"
	Nov 26 20:46:29 old-k8s-version-264537 kubelet[1395]: I1126 20:46:29.611769    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cq45\" (UniqueName: \"kubernetes.io/projected/a225364a-610f-4a1a-8675-a654eebbd3cc-kube-api-access-5cq45\") pod \"storage-provisioner\" (UID: \"a225364a-610f-4a1a-8675-a654eebbd3cc\") " pod="kube-system/storage-provisioner"
	Nov 26 20:46:29 old-k8s-version-264537 kubelet[1395]: I1126 20:46:29.611798    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpfnc\" (UniqueName: \"kubernetes.io/projected/3478d6d7-c19e-4d95-a1bb-250fd6b7231a-kube-api-access-kpfnc\") pod \"coredns-5dd5756b68-w99t5\" (UID: \"3478d6d7-c19e-4d95-a1bb-250fd6b7231a\") " pod="kube-system/coredns-5dd5756b68-w99t5"
	Nov 26 20:46:29 old-k8s-version-264537 kubelet[1395]: I1126 20:46:29.611824    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3478d6d7-c19e-4d95-a1bb-250fd6b7231a-config-volume\") pod \"coredns-5dd5756b68-w99t5\" (UID: \"3478d6d7-c19e-4d95-a1bb-250fd6b7231a\") " pod="kube-system/coredns-5dd5756b68-w99t5"
	Nov 26 20:46:29 old-k8s-version-264537 kubelet[1395]: W1126 20:46:29.826997    1395 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/crio-05302bfc787896037862d1926e65bcbf5985e1363ba6045f2bfd8bd4d8f51741 WatchSource:0}: Error finding container 05302bfc787896037862d1926e65bcbf5985e1363ba6045f2bfd8bd4d8f51741: Status 404 returned error can't find the container with id 05302bfc787896037862d1926e65bcbf5985e1363ba6045f2bfd8bd4d8f51741
	Nov 26 20:46:30 old-k8s-version-264537 kubelet[1395]: I1126 20:46:30.123368    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-w99t5" podStartSLOduration=15.123215571 podCreationTimestamp="2025-11-26 20:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:46:30.100803394 +0000 UTC m=+27.326264572" watchObservedRunningTime="2025-11-26 20:46:30.123215571 +0000 UTC m=+27.348676692"
	Nov 26 20:46:30 old-k8s-version-264537 kubelet[1395]: I1126 20:46:30.150457    1395 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.15039443 podCreationTimestamp="2025-11-26 20:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:46:30.126237073 +0000 UTC m=+27.351698194" watchObservedRunningTime="2025-11-26 20:46:30.15039443 +0000 UTC m=+27.375855551"
	Nov 26 20:46:32 old-k8s-version-264537 kubelet[1395]: I1126 20:46:32.250615    1395 topology_manager.go:215] "Topology Admit Handler" podUID="08c368e4-7be3-4bc3-bde6-222d7bd7f0c1" podNamespace="default" podName="busybox"
	Nov 26 20:46:32 old-k8s-version-264537 kubelet[1395]: I1126 20:46:32.429325    1395 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zkbr\" (UniqueName: \"kubernetes.io/projected/08c368e4-7be3-4bc3-bde6-222d7bd7f0c1-kube-api-access-8zkbr\") pod \"busybox\" (UID: \"08c368e4-7be3-4bc3-bde6-222d7bd7f0c1\") " pod="default/busybox"
	Nov 26 20:46:32 old-k8s-version-264537 kubelet[1395]: W1126 20:46:32.577212    1395 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/crio-9a64571ea8449bce67e68f929c1e879ae5f6496fb57009e8d98931c3a8f308bc WatchSource:0}: Error finding container 9a64571ea8449bce67e68f929c1e879ae5f6496fb57009e8d98931c3a8f308bc: Status 404 returned error can't find the container with id 9a64571ea8449bce67e68f929c1e879ae5f6496fb57009e8d98931c3a8f308bc
	
	
	==> storage-provisioner [4ed52bc56548d5db7d5838b40a9c14718bb4725eeef97584a836156ad9165370] <==
	I1126 20:46:29.887179       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:46:29.906523       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:46:29.906580       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1126 20:46:29.914995       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:46:29.917267       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24d009d6-7643-48bd-8682-d8a75e344fd3", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-264537_8e77e9cb-5912-455a-841b-7ea2e0d29cde became leader
	I1126 20:46:29.917461       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-264537_8e77e9cb-5912-455a-841b-7ea2e0d29cde!
	I1126 20:46:30.017785       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-264537_8e77e9cb-5912-455a-841b-7ea2e0d29cde!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-264537 -n old-k8s-version-264537
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-264537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

TestStartStop/group/old-k8s-version/serial/Pause (6.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-264537 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-264537 --alsologtostderr -v=1: exit status 80 (2.132930737s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-264537 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 20:48:00.901472  207161 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:48:00.904161  207161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:48:00.904198  207161 out.go:374] Setting ErrFile to fd 2...
	I1126 20:48:00.904224  207161 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:48:00.904545  207161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:48:00.904851  207161 out.go:368] Setting JSON to false
	I1126 20:48:00.904912  207161 mustload.go:66] Loading cluster: old-k8s-version-264537
	I1126 20:48:00.905383  207161 config.go:182] Loaded profile config "old-k8s-version-264537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1126 20:48:00.905980  207161 cli_runner.go:164] Run: docker container inspect old-k8s-version-264537 --format={{.State.Status}}
	I1126 20:48:00.925338  207161 host.go:66] Checking if "old-k8s-version-264537" exists ...
	I1126 20:48:00.925643  207161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:48:01.019492  207161 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-26 20:48:01.008982735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:48:01.020251  207161 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-264537 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:48:01.023526  207161 out.go:179] * Pausing node old-k8s-version-264537 ... 
	I1126 20:48:01.027212  207161 host.go:66] Checking if "old-k8s-version-264537" exists ...
	I1126 20:48:01.027708  207161 ssh_runner.go:195] Run: systemctl --version
	I1126 20:48:01.027839  207161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-264537
	I1126 20:48:01.056875  207161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/old-k8s-version-264537/id_rsa Username:docker}
	I1126 20:48:01.183391  207161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:48:01.214319  207161 pause.go:52] kubelet running: true
	I1126 20:48:01.214406  207161 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:48:01.554424  207161 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:48:01.554504  207161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:48:01.670856  207161 cri.go:89] found id: "1a32d3b9e1883a1260ba649c81bec9f5cb7ef22f8f4590f95375ae969df1afa3"
	I1126 20:48:01.670918  207161 cri.go:89] found id: "388e7dcee4c17a2c45f2d8d832a2db44f06e5528a2a10d8c3df08d344c25a223"
	I1126 20:48:01.670937  207161 cri.go:89] found id: "bbe5e0cd6e0ff7a9722e7413ce8f89636a2abf001545e870532eebd22a93e60e"
	I1126 20:48:01.670956  207161 cri.go:89] found id: "572c2c85ed7d71acf7cd0c767201ce638ca7e6d276cc20883a0484e7f244d60c"
	I1126 20:48:01.670975  207161 cri.go:89] found id: "cebff254eb17a577d788fffed5cf8c4fbba80094b1b83ce0d7aa765376039071"
	I1126 20:48:01.671004  207161 cri.go:89] found id: "861cdf83e26ecaeb9c2086ba6ee2b898b58cb27f499652485c8d87139834385c"
	I1126 20:48:01.671028  207161 cri.go:89] found id: "f3a99a92a571f35772e62af2f45fb5484878af13d4e0ff35e1338a2d989b68d4"
	I1126 20:48:01.671047  207161 cri.go:89] found id: "400d8fb8f7491a2ea343f20eb65e87a3111f2b78d29d6524dbd6edd1594351e6"
	I1126 20:48:01.671067  207161 cri.go:89] found id: "aa3ee34dbdfd346cd8a9d14474e49263180adb69e68a3838f6741dca0ea9cdab"
	I1126 20:48:01.671093  207161 cri.go:89] found id: "dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06"
	I1126 20:48:01.671125  207161 cri.go:89] found id: "6b03c1e9591cf653833abe711757adaa1bdfd1190816fd296d1f0a76357eae13"
	I1126 20:48:01.671142  207161 cri.go:89] found id: ""
	I1126 20:48:01.671218  207161 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:48:01.684324  207161 retry.go:31] will retry after 260.360199ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:48:01Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:48:01.945850  207161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:48:01.958968  207161 pause.go:52] kubelet running: false
	I1126 20:48:01.959057  207161 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:48:02.124343  207161 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:48:02.124457  207161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:48:02.193857  207161 cri.go:89] found id: "1a32d3b9e1883a1260ba649c81bec9f5cb7ef22f8f4590f95375ae969df1afa3"
	I1126 20:48:02.193881  207161 cri.go:89] found id: "388e7dcee4c17a2c45f2d8d832a2db44f06e5528a2a10d8c3df08d344c25a223"
	I1126 20:48:02.193886  207161 cri.go:89] found id: "bbe5e0cd6e0ff7a9722e7413ce8f89636a2abf001545e870532eebd22a93e60e"
	I1126 20:48:02.193890  207161 cri.go:89] found id: "572c2c85ed7d71acf7cd0c767201ce638ca7e6d276cc20883a0484e7f244d60c"
	I1126 20:48:02.193894  207161 cri.go:89] found id: "cebff254eb17a577d788fffed5cf8c4fbba80094b1b83ce0d7aa765376039071"
	I1126 20:48:02.193898  207161 cri.go:89] found id: "861cdf83e26ecaeb9c2086ba6ee2b898b58cb27f499652485c8d87139834385c"
	I1126 20:48:02.193910  207161 cri.go:89] found id: "f3a99a92a571f35772e62af2f45fb5484878af13d4e0ff35e1338a2d989b68d4"
	I1126 20:48:02.193914  207161 cri.go:89] found id: "400d8fb8f7491a2ea343f20eb65e87a3111f2b78d29d6524dbd6edd1594351e6"
	I1126 20:48:02.193941  207161 cri.go:89] found id: "aa3ee34dbdfd346cd8a9d14474e49263180adb69e68a3838f6741dca0ea9cdab"
	I1126 20:48:02.193949  207161 cri.go:89] found id: "dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06"
	I1126 20:48:02.193953  207161 cri.go:89] found id: "6b03c1e9591cf653833abe711757adaa1bdfd1190816fd296d1f0a76357eae13"
	I1126 20:48:02.193957  207161 cri.go:89] found id: ""
	I1126 20:48:02.194019  207161 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:48:02.205582  207161 retry.go:31] will retry after 469.723024ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:48:02Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:48:02.676265  207161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:48:02.689663  207161 pause.go:52] kubelet running: false
	I1126 20:48:02.689740  207161 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:48:02.854682  207161 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:48:02.854800  207161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:48:02.921947  207161 cri.go:89] found id: "1a32d3b9e1883a1260ba649c81bec9f5cb7ef22f8f4590f95375ae969df1afa3"
	I1126 20:48:02.921971  207161 cri.go:89] found id: "388e7dcee4c17a2c45f2d8d832a2db44f06e5528a2a10d8c3df08d344c25a223"
	I1126 20:48:02.921976  207161 cri.go:89] found id: "bbe5e0cd6e0ff7a9722e7413ce8f89636a2abf001545e870532eebd22a93e60e"
	I1126 20:48:02.921982  207161 cri.go:89] found id: "572c2c85ed7d71acf7cd0c767201ce638ca7e6d276cc20883a0484e7f244d60c"
	I1126 20:48:02.921985  207161 cri.go:89] found id: "cebff254eb17a577d788fffed5cf8c4fbba80094b1b83ce0d7aa765376039071"
	I1126 20:48:02.921988  207161 cri.go:89] found id: "861cdf83e26ecaeb9c2086ba6ee2b898b58cb27f499652485c8d87139834385c"
	I1126 20:48:02.921991  207161 cri.go:89] found id: "f3a99a92a571f35772e62af2f45fb5484878af13d4e0ff35e1338a2d989b68d4"
	I1126 20:48:02.921996  207161 cri.go:89] found id: "400d8fb8f7491a2ea343f20eb65e87a3111f2b78d29d6524dbd6edd1594351e6"
	I1126 20:48:02.921999  207161 cri.go:89] found id: "aa3ee34dbdfd346cd8a9d14474e49263180adb69e68a3838f6741dca0ea9cdab"
	I1126 20:48:02.922007  207161 cri.go:89] found id: "dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06"
	I1126 20:48:02.922011  207161 cri.go:89] found id: "6b03c1e9591cf653833abe711757adaa1bdfd1190816fd296d1f0a76357eae13"
	I1126 20:48:02.922024  207161 cri.go:89] found id: ""
	I1126 20:48:02.922071  207161 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:48:02.937288  207161 out.go:203] 
	W1126 20:48:02.940240  207161 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:48:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:48:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:48:02.940278  207161 out.go:285] * 
	* 
	W1126 20:48:02.946173  207161 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:48:02.949070  207161 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-264537 --alsologtostderr -v=1 failed: exit status 80
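The stderr above shows the pause path retrying `sudo runc list -f json` with short waits ("will retry after 260.360199ms", then 469ms) before surfacing the final `open /run/runc: no such file or directory` error as GUEST_PAUSE. A minimal, purely illustrative Python sketch of that retry shape follows; it is not minikube's actual `retry.go`, and the error string is copied from this log only as sample data:

```python
import time

def retry_with_backoff(fn, delays):
    """Call fn; after each failure, sleep for the next delay and retry.

    Mirrors the pattern visible in the log above: each failed
    `runc list` is followed by a "will retry after ..." wait, and
    once the delays are exhausted the last error is raised to the
    caller. Illustrative sketch only.
    """
    last_err = None
    for delay in [*delays, None]:
        try:
            return fn()
        except OSError as err:
            last_err = err
            if delay is None:
                break  # no retries left; surface the final error
            time.sleep(delay)
    raise last_err

attempts = []
def failing_runc_list():
    """Stand-in for the failing `sudo runc list -f json` call."""
    attempts.append(1)
    raise OSError("open /run/runc: no such file or directory")

try:
    retry_with_backoff(failing_runc_list, [0.001, 0.001])
except OSError as err:
    print(len(attempts), err)  # -> 3 open /run/runc: no such file or directory
```

With two delays configured, the stub is attempted three times before the error escapes, matching the three `runc list` attempts recorded in the stderr above.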
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-264537
helpers_test.go:243: (dbg) docker inspect old-k8s-version-264537:

-- stdout --
	[
	    {
	        "Id": "a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747",
	        "Created": "2025-11-26T20:45:36.56908992Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204624,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:46:56.490187296Z",
	            "FinishedAt": "2025-11-26T20:46:55.68738797Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/hostname",
	        "HostsPath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/hosts",
	        "LogPath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747-json.log",
	        "Name": "/old-k8s-version-264537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-264537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-264537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747",
	                "LowerDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-264537",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-264537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-264537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-264537",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-264537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "76de3de1debeb9f2d25049e20f8a9d1998bd09952db79fc7a96437ae230caf2d",
	            "SandboxKey": "/var/run/docker/netns/76de3de1debe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-264537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:4a:ee:a7:c5:85",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a0df607f6641f4214aa99b2f7e135610ec93c7d857cfae2423703322c6f61751",
	                    "EndpointID": "7a9c4cb75d2ee3539c03aa27baa3dcfae6bf9f3f3f4627aec02cd076b9f3ae12",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-264537",
	                        "a5e16735df4a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
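Earlier in the stderr, minikube resolved the node's SSH endpoint from this inspect data with the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, yielding port 33048. The same lookup can be sketched in Python against a trimmed copy of the inspect output above (only the fields needed for the lookup are kept; this is an illustration, not minikube's code):

```python
import json

# Trimmed from the `docker inspect old-k8s-version-264537` output above.
inspect_json = """
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33048"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33051"}]
      }
    }
  }
]
"""

def host_port(inspect_output, container_port):
    """Return the first host port bound to container_port.

    Equivalent to the Go template used in the log:
    {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
    """
    container = json.loads(inspect_output)[0]  # inspect returns a JSON array
    bindings = container["NetworkSettings"]["Ports"][container_port]
    return bindings[0]["HostPort"]

print(host_port(inspect_json, "22/tcp"))  # -> 33048
```

This matches the `new ssh client: &{IP:127.0.0.1 Port:33048 ...}` line in the stderr above.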
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264537 -n old-k8s-version-264537
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264537 -n old-k8s-version-264537: exit status 2 (350.956381ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-264537 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-264537 logs -n 25: (1.284393998s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-235709 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo containerd config dump                                                                                                                                                                                                  │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo crio config                                                                                                                                                                                                             │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ delete  │ -p cilium-235709                                                                                                                                                                                                                              │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p force-systemd-env-274518 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-274518  │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ ssh     │ force-systemd-flag-622960 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-622960 │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ delete  │ -p force-systemd-flag-622960                                                                                                                                                                                                                  │ force-systemd-flag-622960 │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-164741    │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ delete  │ -p force-systemd-env-274518                                                                                                                                                                                                                   │ force-systemd-env-274518  │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p cert-options-207115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ cert-options-207115 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ -p cert-options-207115 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ delete  │ -p cert-options-207115                                                                                                                                                                                                                        │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-264537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │                     │
	│ stop    │ -p old-k8s-version-264537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-264537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:47 UTC │
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164741    │ jenkins │ v1.37.0 │ 26 Nov 25 20:47 UTC │                     │
	│ image   │ old-k8s-version-264537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ pause   │ -p old-k8s-version-264537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:47:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:47:52.681527  206678 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:47:52.681630  206678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:47:52.681634  206678 out.go:374] Setting ErrFile to fd 2...
	I1126 20:47:52.681638  206678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:47:52.681871  206678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:47:52.682240  206678 out.go:368] Setting JSON to false
	I1126 20:47:52.683138  206678 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5403,"bootTime":1764184670,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:47:52.683191  206678 start.go:143] virtualization:  
	I1126 20:47:52.686531  206678 out.go:179] * [cert-expiration-164741] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:47:52.689551  206678 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:47:52.689647  206678 notify.go:221] Checking for updates...
	I1126 20:47:52.695178  206678 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:47:52.698145  206678 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:47:52.701615  206678 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:47:52.704400  206678 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:47:52.707364  206678 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:47:52.710558  206678 config.go:182] Loaded profile config "cert-expiration-164741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:47:52.711091  206678 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:47:52.739231  206678 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:47:52.739336  206678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:47:52.809006  206678 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:47:52.799833852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:47:52.809102  206678 docker.go:319] overlay module found
	I1126 20:47:52.812129  206678 out.go:179] * Using the docker driver based on existing profile
	I1126 20:47:52.815054  206678 start.go:309] selected driver: docker
	I1126 20:47:52.815063  206678 start.go:927] validating driver "docker" against &{Name:cert-expiration-164741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-164741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:47:52.815148  206678 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:47:52.815869  206678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:47:52.874018  206678 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:47:52.865081337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:47:52.874311  206678 cni.go:84] Creating CNI manager for ""
	I1126 20:47:52.874369  206678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:47:52.874406  206678 start.go:353] cluster config:
	{Name:cert-expiration-164741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-164741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:47:52.877440  206678 out.go:179] * Starting "cert-expiration-164741" primary control-plane node in "cert-expiration-164741" cluster
	I1126 20:47:52.880382  206678 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:47:52.883465  206678 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:47:52.886368  206678 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:47:52.886404  206678 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:47:52.886413  206678 cache.go:65] Caching tarball of preloaded images
	I1126 20:47:52.886428  206678 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:47:52.886498  206678 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:47:52.886507  206678 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:47:52.886615  206678 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/cert-expiration-164741/config.json ...
	I1126 20:47:52.909112  206678 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:47:52.909123  206678 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:47:52.909136  206678 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:47:52.909166  206678 start.go:360] acquireMachinesLock for cert-expiration-164741: {Name:mka3ecf1e428c26500994e5e1766791d0c225fa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:47:52.909229  206678 start.go:364] duration metric: took 46.054µs to acquireMachinesLock for "cert-expiration-164741"
	I1126 20:47:52.909248  206678 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:47:52.909252  206678 fix.go:54] fixHost starting: 
	I1126 20:47:52.909532  206678 cli_runner.go:164] Run: docker container inspect cert-expiration-164741 --format={{.State.Status}}
	I1126 20:47:52.926133  206678 fix.go:112] recreateIfNeeded on cert-expiration-164741: state=Running err=<nil>
	W1126 20:47:52.926169  206678 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:47:52.929482  206678 out.go:252] * Updating the running docker "cert-expiration-164741" container ...
	I1126 20:47:52.929505  206678 machine.go:94] provisionDockerMachine start ...
	I1126 20:47:52.929576  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:52.947051  206678 main.go:143] libmachine: Using SSH client type: native
	I1126 20:47:52.947374  206678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1126 20:47:52.947380  206678 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:47:53.106443  206678 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-164741
	
	I1126 20:47:53.106456  206678 ubuntu.go:182] provisioning hostname "cert-expiration-164741"
	I1126 20:47:53.106515  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:53.127275  206678 main.go:143] libmachine: Using SSH client type: native
	I1126 20:47:53.127588  206678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1126 20:47:53.127596  206678 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-164741 && echo "cert-expiration-164741" | sudo tee /etc/hostname
	I1126 20:47:53.292322  206678 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-164741
	
	I1126 20:47:53.292402  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:53.310734  206678 main.go:143] libmachine: Using SSH client type: native
	I1126 20:47:53.311029  206678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1126 20:47:53.311043  206678 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-164741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-164741/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-164741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:47:53.462183  206678 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:47:53.462197  206678 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:47:53.462214  206678 ubuntu.go:190] setting up certificates
	I1126 20:47:53.462223  206678 provision.go:84] configureAuth start
	I1126 20:47:53.462279  206678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-164741
	I1126 20:47:53.480400  206678 provision.go:143] copyHostCerts
	I1126 20:47:53.480458  206678 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:47:53.480471  206678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:47:53.480558  206678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:47:53.480674  206678 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:47:53.480678  206678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:47:53.480704  206678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:47:53.480750  206678 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:47:53.480754  206678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:47:53.480776  206678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:47:53.480823  206678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-164741 san=[127.0.0.1 192.168.85.2 cert-expiration-164741 localhost minikube]
	I1126 20:47:53.720228  206678 provision.go:177] copyRemoteCerts
	I1126 20:47:53.720279  206678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:47:53.720314  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:53.741714  206678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/cert-expiration-164741/id_rsa Username:docker}
	I1126 20:47:53.850637  206678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:47:53.870529  206678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1126 20:47:53.889021  206678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:47:53.909027  206678 provision.go:87] duration metric: took 446.781591ms to configureAuth
	I1126 20:47:53.909044  206678 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:47:53.909245  206678 config.go:182] Loaded profile config "cert-expiration-164741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:47:53.909371  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:53.926979  206678 main.go:143] libmachine: Using SSH client type: native
	I1126 20:47:53.927277  206678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1126 20:47:53.927289  206678 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:47:59.345916  206678 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:47:59.345998  206678 machine.go:97] duration metric: took 6.416487237s to provisionDockerMachine
	I1126 20:47:59.346008  206678 start.go:293] postStartSetup for "cert-expiration-164741" (driver="docker")
	I1126 20:47:59.346018  206678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:47:59.346086  206678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:47:59.346124  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:59.364160  206678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/cert-expiration-164741/id_rsa Username:docker}
	I1126 20:47:59.469574  206678 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:47:59.472894  206678 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:47:59.472911  206678 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:47:59.472920  206678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:47:59.472970  206678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:47:59.473043  206678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:47:59.473134  206678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:47:59.480380  206678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:47:59.496687  206678 start.go:296] duration metric: took 150.665893ms for postStartSetup
	I1126 20:47:59.496752  206678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:47:59.496787  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:59.514678  206678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/cert-expiration-164741/id_rsa Username:docker}
	I1126 20:47:59.615449  206678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
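The two `df` probes above are minikube's post-start disk checks: the first extracts the used-space percentage for `/var`, the second the free space in whole gigabytes. A local sketch of the same extraction (run against `/` instead of the remote `/var`; the variable names are mine, not minikube's) looks like:

```shell
# Sketch of minikube's post-start disk checks, run locally against "/"
# instead of the remote /var. NR==2 selects the data row under df's header.
used_pct=$(df -h / | awk 'NR==2{print $5}')   # e.g. "12%"
free_g=$(df -BG / | awk 'NR==2{print $4}')    # e.g. "50G" (-BG = 1G blocks)
echo "used=${used_pct} free=${free_g}"
```

Note `-BG` is a GNU coreutils option; BSD `df` spells the same idea `-g`.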
	I1126 20:47:59.620931  206678 fix.go:56] duration metric: took 6.71167234s for fixHost
	I1126 20:47:59.620947  206678 start.go:83] releasing machines lock for "cert-expiration-164741", held for 6.711710796s
	I1126 20:47:59.621013  206678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-164741
	I1126 20:47:59.638689  206678 ssh_runner.go:195] Run: cat /version.json
	I1126 20:47:59.638732  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:59.638996  206678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:47:59.639055  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:59.656859  206678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/cert-expiration-164741/id_rsa Username:docker}
	I1126 20:47:59.670875  206678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/cert-expiration-164741/id_rsa Username:docker}
	I1126 20:47:59.765624  206678 ssh_runner.go:195] Run: systemctl --version
	I1126 20:47:59.878771  206678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:47:59.939378  206678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:47:59.943930  206678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:47:59.943997  206678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:47:59.953076  206678 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:47:59.953098  206678 start.go:496] detecting cgroup driver to use...
	I1126 20:47:59.953128  206678 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:47:59.953171  206678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:47:59.968250  206678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:47:59.980935  206678 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:47:59.980987  206678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:47:59.996467  206678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:48:00.015081  206678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:48:00.572570  206678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:48:00.757899  206678 docker.go:234] disabling docker service ...
	I1126 20:48:00.757998  206678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:48:00.782242  206678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:48:00.798104  206678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:48:00.987561  206678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:48:01.159035  206678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:48:01.178182  206678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:48:01.207387  206678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:48:01.207446  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.226827  206678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:48:01.226980  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.240611  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.250864  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.266769  206678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:48:01.282655  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.299296  206678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.317761  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.337004  206678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:48:01.351247  206678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:48:01.367827  206678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:48:01.577163  206678 ssh_runner.go:195] Run: sudo systemctl restart crio
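The run of `sed` commands between 20:48:01.207 and 20:48:01.317 above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch `cgroup_manager` to the detected `cgroupfs`, replace `conmon_cgroup` with `"pod"`, and ensure a `default_sysctls` list that opens unprivileged low ports. A self-contained sketch of those edits, applied to a sample config in a temp file (GNU sed assumed, as on the Debian host in the log; the starting values in the sample are illustrative):

```shell
#!/bin/sh
# Replay of the CRI-O config edits from the log, against a sample file
# instead of /etc/crio/crio.conf.d/02-crio.conf. Starting values invented.
set -eu
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.runtime]
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Pin the pause image used for pod sandboxes.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
# Match the cgroup driver detected on the host ("cgroupfs" in the log).
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
# Replace conmon_cgroup: delete the old line, append "pod" after cgroup_manager.
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
# Ensure a default_sysctls list exists, then allow unprivileged low ports.
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

cat "$conf"
```

The in-place edits rely on GNU sed extensions (one-line `a text` and `\n` in append/replacement text), which is why minikube runs them on the guest rather than portably.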
	
	
	==> CRI-O <==
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.158395926Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2455e69b-bf4f-42f1-b6ce-c0e65697c451 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.160247886Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=08b19a49-63d1-4bcb-aabd-1fb4a6a9ae3a name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.161728855Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq/dashboard-metrics-scraper" id=729de349-5a2d-4416-83d0-6c3af184aa43 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.161869708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.178277352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.179145845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.20010479Z" level=info msg="Created container dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq/dashboard-metrics-scraper" id=729de349-5a2d-4416-83d0-6c3af184aa43 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.202285412Z" level=info msg="Starting container: dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06" id=3f78be56-db5b-4574-9006-7ed3fb2e3549 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.207704927Z" level=info msg="Started container" PID=1659 containerID=dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq/dashboard-metrics-scraper id=3f78be56-db5b-4574-9006-7ed3fb2e3549 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67b17ca1b67f7c9236549e5bcba01e44670bcb917456901abfa98027ef1a25b6
	Nov 26 20:47:42 old-k8s-version-264537 conmon[1657]: conmon dc08ffa195fa3f67a325 <ninfo>: container 1659 exited with status 1
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.746331999Z" level=info msg="Removing container: e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554" id=838058ce-f55b-43f5-959c-9ea2dd75f3fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.756862461Z" level=info msg="Error loading conmon cgroup of container e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554: cgroup deleted" id=838058ce-f55b-43f5-959c-9ea2dd75f3fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.76179944Z" level=info msg="Removed container e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq/dashboard-metrics-scraper" id=838058ce-f55b-43f5-959c-9ea2dd75f3fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.348873071Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.353334472Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.353370656Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.353404574Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.356770988Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.356829119Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.356871916Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.360082798Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.360177523Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.360201178Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.363158521Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.363190717Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	dc08ffa195fa3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   67b17ca1b67f7       dashboard-metrics-scraper-5f989dc9cf-fwkcq       kubernetes-dashboard
	1a32d3b9e1883       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago      Running             storage-provisioner         2                   d531340ed6f2f       storage-provisioner                              kube-system
	6b03c1e9591cf       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   999806ebfe563       kubernetes-dashboard-8694d4445c-zpz9j            kubernetes-dashboard
	388e7dcee4c17       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           53 seconds ago      Running             coredns                     1                   64fc3b4f4df93       coredns-5dd5756b68-w99t5                         kube-system
	806b8f773ec93       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   b50827bae7112       busybox                                          default
	bbe5e0cd6e0ff       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   d531340ed6f2f       storage-provisioner                              kube-system
	572c2c85ed7d7       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           54 seconds ago      Running             kube-proxy                  1                   f0d6a3a3a42c7       kube-proxy-9rv9c                                 kube-system
	cebff254eb17a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago      Running             kindnet-cni                 1                   1619fb7047a49       kindnet-6k58p                                    kube-system
	861cdf83e26ec       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           59 seconds ago      Running             kube-controller-manager     1                   ec61a32c82932       kube-controller-manager-old-k8s-version-264537   kube-system
	f3a99a92a571f       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           59 seconds ago      Running             kube-scheduler              1                   92641896475fc       kube-scheduler-old-k8s-version-264537            kube-system
	400d8fb8f7491       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           59 seconds ago      Running             etcd                        1                   31fb6c31328e9       etcd-old-k8s-version-264537                      kube-system
	aa3ee34dbdfd3       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           59 seconds ago      Running             kube-apiserver              1                   556feb345390a       kube-apiserver-old-k8s-version-264537            kube-system
	
	
	==> coredns [388e7dcee4c17a2c45f2d8d832a2db44f06e5528a2a10d8c3df08d344c25a223] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44858 - 38215 "HINFO IN 1499662222745090337.227637672896505582. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004364031s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-264537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-264537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=old-k8s-version-264537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_46_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:45:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-264537
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:48:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:47:39 +0000   Wed, 26 Nov 2025 20:45:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:47:39 +0000   Wed, 26 Nov 2025 20:45:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:47:39 +0000   Wed, 26 Nov 2025 20:45:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:47:39 +0000   Wed, 26 Nov 2025 20:46:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-264537
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                8b1866d5-0ca9-4303-8791-a0bc9b937ae1
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-w99t5                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-old-k8s-version-264537                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-6k58p                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-264537             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-264537    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-9rv9c                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-264537             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fwkcq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-zpz9j             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 2m2s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node old-k8s-version-264537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node old-k8s-version-264537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node old-k8s-version-264537 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           110s               node-controller  Node old-k8s-version-264537 event: Registered Node old-k8s-version-264537 in Controller
	  Normal  NodeReady                95s                kubelet          Node old-k8s-version-264537 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node old-k8s-version-264537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node old-k8s-version-264537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node old-k8s-version-264537 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                node-controller  Node old-k8s-version-264537 event: Registered Node old-k8s-version-264537 in Controller
	
	
	==> dmesg <==
	[Nov26 20:16] overlayfs: idmapped layers are currently not supported
	[Nov26 20:21] overlayfs: idmapped layers are currently not supported
	[ +33.563196] overlayfs: idmapped layers are currently not supported
	[Nov26 20:23] overlayfs: idmapped layers are currently not supported
	[Nov26 20:24] overlayfs: idmapped layers are currently not supported
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [400d8fb8f7491a2ea343f20eb65e87a3111f2b78d29d6524dbd6edd1594351e6] <==
	{"level":"info","ts":"2025-11-26T20:47:04.353166Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-26T20:47:04.353174Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-26T20:47:04.353357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-26T20:47:04.353417Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-26T20:47:04.353482Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:47:04.353506Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:47:04.420852Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-26T20:47:04.434265Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-26T20:47:04.434318Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-26T20:47:04.434395Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:47:04.434403Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:47:05.501974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-26T20:47:05.502096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-26T20:47:05.502171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-26T20:47:05.502211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-26T20:47:05.502242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-26T20:47:05.502275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-26T20:47:05.502308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-26T20:47:05.506105Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-264537 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-26T20:47:05.506196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:47:05.507289Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-26T20:47:05.506217Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:47:05.518619Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-26T20:47:05.518715Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-26T20:47:05.52616Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:48:04 up  1:30,  0 user,  load average: 2.76, 2.97, 2.35
	Linux old-k8s-version-264537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cebff254eb17a577d788fffed5cf8c4fbba80094b1b83ce0d7aa765376039071] <==
	I1126 20:47:10.194919       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:47:10.195124       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:47:10.195238       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:47:10.195249       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:47:10.195260       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:47:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:47:10.347799       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:47:10.347819       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:47:10.347827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:47:10.348106       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:47:40.346442       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:47:40.348023       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:47:40.348100       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:47:40.349354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1126 20:47:41.648374       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:47:41.648403       1 metrics.go:72] Registering metrics
	I1126 20:47:41.648472       1 controller.go:711] "Syncing nftables rules"
	I1126 20:47:50.347601       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:47:50.347665       1 main.go:301] handling current node
	I1126 20:48:00.350025       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:48:00.350066       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aa3ee34dbdfd346cd8a9d14474e49263180adb69e68a3838f6741dca0ea9cdab] <==
	I1126 20:47:09.167293       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1126 20:47:09.169156       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:47:09.195212       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1126 20:47:09.222000       1 aggregator.go:166] initial CRD sync complete...
	I1126 20:47:09.222083       1 autoregister_controller.go:141] Starting autoregister controller
	I1126 20:47:09.222115       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:47:09.222154       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:47:09.231200       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:47:09.284733       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1126 20:47:09.833309       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:47:10.987873       1 controller.go:624] quota admission added evaluator for: namespaces
	I1126 20:47:11.053676       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1126 20:47:11.080411       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:47:11.097283       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:47:11.120927       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1126 20:47:11.206208       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.109.177"}
	I1126 20:47:11.262785       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.217.40"}
	E1126 20:47:19.161066       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I1126 20:47:21.767909       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1126 20:47:21.784685       1 controller.go:624] quota admission added evaluator for: endpoints
	I1126 20:47:21.928628       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1126 20:47:29.161318       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1126 20:47:39.161886       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E1126 20:47:49.162144       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1126 20:47:59.163369       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [861cdf83e26ecaeb9c2086ba6ee2b898b58cb27f499652485c8d87139834385c] <==
	I1126 20:47:21.921007       1 shared_informer.go:318] Caches are synced for node
	I1126 20:47:21.922881       1 range_allocator.go:174] "Sending events to api server"
	I1126 20:47:21.922987       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1126 20:47:21.923036       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1126 20:47:21.923068       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1126 20:47:21.925200       1 shared_informer.go:318] Caches are synced for taint
	I1126 20:47:21.925395       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1126 20:47:21.925551       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1126 20:47:21.925646       1 taint_manager.go:211] "Sending events to api server"
	I1126 20:47:21.926048       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-264537"
	I1126 20:47:21.926132       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1126 20:47:21.926230       1 event.go:307] "Event occurred" object="old-k8s-version-264537" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-264537 event: Registered Node old-k8s-version-264537 in Controller"
	I1126 20:47:21.934492       1 shared_informer.go:318] Caches are synced for TTL
	I1126 20:47:22.276347       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:47:22.276384       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1126 20:47:22.295934       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:47:27.739365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.93929ms"
	I1126 20:47:27.740177       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.048µs"
	I1126 20:47:31.731211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.359µs"
	I1126 20:47:32.737364       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.08µs"
	I1126 20:47:33.735944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.375µs"
	I1126 20:47:42.764584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.97µs"
	I1126 20:47:47.160187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.602158ms"
	I1126 20:47:47.160299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.161µs"
	I1126 20:47:52.181257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.486µs"
	
	
	==> kube-proxy [572c2c85ed7d71acf7cd0c767201ce638ca7e6d276cc20883a0484e7f244d60c] <==
	I1126 20:47:10.340845       1 server_others.go:69] "Using iptables proxy"
	I1126 20:47:10.367102       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1126 20:47:10.418579       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:47:10.434425       1 server_others.go:152] "Using iptables Proxier"
	I1126 20:47:10.441194       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1126 20:47:10.441216       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1126 20:47:10.441247       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1126 20:47:10.441488       1 server.go:846] "Version info" version="v1.28.0"
	I1126 20:47:10.441498       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:47:10.442557       1 config.go:188] "Starting service config controller"
	I1126 20:47:10.442583       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1126 20:47:10.442603       1 config.go:97] "Starting endpoint slice config controller"
	I1126 20:47:10.442606       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1126 20:47:10.444904       1 config.go:315] "Starting node config controller"
	I1126 20:47:10.444916       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1126 20:47:10.542936       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1126 20:47:10.542984       1 shared_informer.go:318] Caches are synced for service config
	I1126 20:47:10.545260       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f3a99a92a571f35772e62af2f45fb5484878af13d4e0ff35e1338a2d989b68d4] <==
	I1126 20:47:07.273509       1 serving.go:348] Generated self-signed cert in-memory
	W1126 20:47:09.037378       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:47:09.037481       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:47:09.037515       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:47:09.037557       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:47:09.172185       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1126 20:47:09.172227       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:47:09.177416       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1126 20:47:09.177594       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:47:09.177636       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1126 20:47:09.177697       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1126 20:47:09.278185       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.850588     789 topology_manager.go:215] "Topology Admit Handler" podUID="88b5eb99-bcb6-4aae-b2a8-afb053c2093c" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-zpz9j"
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.855012     789 topology_manager.go:215] "Topology Admit Handler" podUID="a26d05da-1c97-4489-89f6-9461174500e9" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-fwkcq"
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.903417     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxfcj\" (UniqueName: \"kubernetes.io/projected/88b5eb99-bcb6-4aae-b2a8-afb053c2093c-kube-api-access-wxfcj\") pod \"kubernetes-dashboard-8694d4445c-zpz9j\" (UID: \"88b5eb99-bcb6-4aae-b2a8-afb053c2093c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zpz9j"
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.903479     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a26d05da-1c97-4489-89f6-9461174500e9-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-fwkcq\" (UID: \"a26d05da-1c97-4489-89f6-9461174500e9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq"
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.903511     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxzk\" (UniqueName: \"kubernetes.io/projected/a26d05da-1c97-4489-89f6-9461174500e9-kube-api-access-cjxzk\") pod \"dashboard-metrics-scraper-5f989dc9cf-fwkcq\" (UID: \"a26d05da-1c97-4489-89f6-9461174500e9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq"
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.903537     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/88b5eb99-bcb6-4aae-b2a8-afb053c2093c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-zpz9j\" (UID: \"88b5eb99-bcb6-4aae-b2a8-afb053c2093c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zpz9j"
	Nov 26 20:47:22 old-k8s-version-264537 kubelet[789]: W1126 20:47:22.183224     789 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/crio-999806ebfe5633117192b24d5b68fefd5c9eee4ae59b1181b88458a03dd8a2a7 WatchSource:0}: Error finding container 999806ebfe5633117192b24d5b68fefd5c9eee4ae59b1181b88458a03dd8a2a7: Status 404 returned error can't find the container with id 999806ebfe5633117192b24d5b68fefd5c9eee4ae59b1181b88458a03dd8a2a7
	Nov 26 20:47:31 old-k8s-version-264537 kubelet[789]: I1126 20:47:31.709362     789 scope.go:117] "RemoveContainer" containerID="8103a63ef2fcabfbacabd12941e51bc1c098bd71cb4c9b82798f342ad5cb8f7a"
	Nov 26 20:47:31 old-k8s-version-264537 kubelet[789]: I1126 20:47:31.735948     789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zpz9j" podStartSLOduration=6.016597476 podCreationTimestamp="2025-11-26 20:47:21 +0000 UTC" firstStartedPulling="2025-11-26 20:47:22.187253223 +0000 UTC m=+18.809561382" lastFinishedPulling="2025-11-26 20:47:26.906537462 +0000 UTC m=+23.528845637" observedRunningTime="2025-11-26 20:47:27.725563318 +0000 UTC m=+24.347871485" watchObservedRunningTime="2025-11-26 20:47:31.735881731 +0000 UTC m=+28.358189889"
	Nov 26 20:47:32 old-k8s-version-264537 kubelet[789]: I1126 20:47:32.713779     789 scope.go:117] "RemoveContainer" containerID="e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554"
	Nov 26 20:47:32 old-k8s-version-264537 kubelet[789]: I1126 20:47:32.714088     789 scope.go:117] "RemoveContainer" containerID="8103a63ef2fcabfbacabd12941e51bc1c098bd71cb4c9b82798f342ad5cb8f7a"
	Nov 26 20:47:32 old-k8s-version-264537 kubelet[789]: E1126 20:47:32.718824     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fwkcq_kubernetes-dashboard(a26d05da-1c97-4489-89f6-9461174500e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq" podUID="a26d05da-1c97-4489-89f6-9461174500e9"
	Nov 26 20:47:33 old-k8s-version-264537 kubelet[789]: I1126 20:47:33.717780     789 scope.go:117] "RemoveContainer" containerID="e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554"
	Nov 26 20:47:33 old-k8s-version-264537 kubelet[789]: E1126 20:47:33.718107     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fwkcq_kubernetes-dashboard(a26d05da-1c97-4489-89f6-9461174500e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq" podUID="a26d05da-1c97-4489-89f6-9461174500e9"
	Nov 26 20:47:40 old-k8s-version-264537 kubelet[789]: I1126 20:47:40.735773     789 scope.go:117] "RemoveContainer" containerID="bbe5e0cd6e0ff7a9722e7413ce8f89636a2abf001545e870532eebd22a93e60e"
	Nov 26 20:47:42 old-k8s-version-264537 kubelet[789]: I1126 20:47:42.157542     789 scope.go:117] "RemoveContainer" containerID="e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554"
	Nov 26 20:47:42 old-k8s-version-264537 kubelet[789]: I1126 20:47:42.744069     789 scope.go:117] "RemoveContainer" containerID="e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554"
	Nov 26 20:47:42 old-k8s-version-264537 kubelet[789]: I1126 20:47:42.744246     789 scope.go:117] "RemoveContainer" containerID="dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06"
	Nov 26 20:47:42 old-k8s-version-264537 kubelet[789]: E1126 20:47:42.744528     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fwkcq_kubernetes-dashboard(a26d05da-1c97-4489-89f6-9461174500e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq" podUID="a26d05da-1c97-4489-89f6-9461174500e9"
	Nov 26 20:47:52 old-k8s-version-264537 kubelet[789]: I1126 20:47:52.158064     789 scope.go:117] "RemoveContainer" containerID="dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06"
	Nov 26 20:47:52 old-k8s-version-264537 kubelet[789]: E1126 20:47:52.158374     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fwkcq_kubernetes-dashboard(a26d05da-1c97-4489-89f6-9461174500e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq" podUID="a26d05da-1c97-4489-89f6-9461174500e9"
	Nov 26 20:48:01 old-k8s-version-264537 kubelet[789]: I1126 20:48:01.498638     789 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 26 20:48:01 old-k8s-version-264537 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:48:01 old-k8s-version-264537 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:48:01 old-k8s-version-264537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [6b03c1e9591cf653833abe711757adaa1bdfd1190816fd296d1f0a76357eae13] <==
	2025/11/26 20:47:26 Starting overwatch
	2025/11/26 20:47:26 Using namespace: kubernetes-dashboard
	2025/11/26 20:47:26 Using in-cluster config to connect to apiserver
	2025/11/26 20:47:26 Using secret token for csrf signing
	2025/11/26 20:47:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:47:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:47:26 Successful initial request to the apiserver, version: v1.28.0
	2025/11/26 20:47:26 Generating JWE encryption key
	2025/11/26 20:47:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:47:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:47:28 Initializing JWE encryption key from synchronized object
	2025/11/26 20:47:28 Creating in-cluster Sidecar client
	2025/11/26 20:47:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:47:28 Serving insecurely on HTTP port: 9090
	2025/11/26 20:47:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1a32d3b9e1883a1260ba649c81bec9f5cb7ef22f8f4590f95375ae969df1afa3] <==
	I1126 20:47:40.787861       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:47:40.800640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:47:40.800685       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1126 20:47:58.196965       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:47:58.197146       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-264537_a47e6b84-3eee-4f8e-b44f-3f9bca49c9bf!
	I1126 20:47:58.197404       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24d009d6-7643-48bd-8682-d8a75e344fd3", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-264537_a47e6b84-3eee-4f8e-b44f-3f9bca49c9bf became leader
	I1126 20:47:58.297808       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-264537_a47e6b84-3eee-4f8e-b44f-3f9bca49c9bf!
	
	
	==> storage-provisioner [bbe5e0cd6e0ff7a9722e7413ce8f89636a2abf001545e870532eebd22a93e60e] <==
	I1126 20:47:10.275253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:47:40.283375       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-264537 -n old-k8s-version-264537
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-264537 -n old-k8s-version-264537: exit status 2 (362.619289ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-264537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-264537
helpers_test.go:243: (dbg) docker inspect old-k8s-version-264537:

-- stdout --
	[
	    {
	        "Id": "a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747",
	        "Created": "2025-11-26T20:45:36.56908992Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204624,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:46:56.490187296Z",
	            "FinishedAt": "2025-11-26T20:46:55.68738797Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/hostname",
	        "HostsPath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/hosts",
	        "LogPath": "/var/lib/docker/containers/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747-json.log",
	        "Name": "/old-k8s-version-264537",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-264537:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-264537",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747",
	                "LowerDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7051b00bcce0d8072bca16b9cd942f07c121d04f16461ee338a38ce225cd81cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-264537",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-264537/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-264537",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-264537",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-264537",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "76de3de1debeb9f2d25049e20f8a9d1998bd09952db79fc7a96437ae230caf2d",
	            "SandboxKey": "/var/run/docker/netns/76de3de1debe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-264537": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:4a:ee:a7:c5:85",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a0df607f6641f4214aa99b2f7e135610ec93c7d857cfae2423703322c6f61751",
	                    "EndpointID": "7a9c4cb75d2ee3539c03aa27baa3dcfae6bf9f3f3f4627aec02cd076b9f3ae12",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-264537",
	                        "a5e16735df4a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264537 -n old-k8s-version-264537
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264537 -n old-k8s-version-264537: exit status 2 (345.681955ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-264537 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-264537 logs -n 25: (1.248147369s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-235709 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo containerd config dump                                                                                                                                                                                                  │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo crio config                                                                                                                                                                                                             │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ delete  │ -p cilium-235709                                                                                                                                                                                                                              │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p force-systemd-env-274518 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-274518  │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ ssh     │ force-systemd-flag-622960 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-622960 │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ delete  │ -p force-systemd-flag-622960                                                                                                                                                                                                                  │ force-systemd-flag-622960 │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-164741    │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ delete  │ -p force-systemd-env-274518                                                                                                                                                                                                                   │ force-systemd-env-274518  │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p cert-options-207115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ cert-options-207115 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ -p cert-options-207115 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ delete  │ -p cert-options-207115                                                                                                                                                                                                                        │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-264537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │                     │
	│ stop    │ -p old-k8s-version-264537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-264537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:47 UTC │
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164741    │ jenkins │ v1.37.0 │ 26 Nov 25 20:47 UTC │                     │
	│ image   │ old-k8s-version-264537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ pause   │ -p old-k8s-version-264537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:47:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:47:52.681527  206678 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:47:52.681630  206678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:47:52.681634  206678 out.go:374] Setting ErrFile to fd 2...
	I1126 20:47:52.681638  206678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:47:52.681871  206678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:47:52.682240  206678 out.go:368] Setting JSON to false
	I1126 20:47:52.683138  206678 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5403,"bootTime":1764184670,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:47:52.683191  206678 start.go:143] virtualization:  
	I1126 20:47:52.686531  206678 out.go:179] * [cert-expiration-164741] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:47:52.689551  206678 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:47:52.689647  206678 notify.go:221] Checking for updates...
	I1126 20:47:52.695178  206678 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:47:52.698145  206678 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:47:52.701615  206678 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:47:52.704400  206678 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:47:52.707364  206678 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:47:52.710558  206678 config.go:182] Loaded profile config "cert-expiration-164741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:47:52.711091  206678 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:47:52.739231  206678 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:47:52.739336  206678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:47:52.809006  206678 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:47:52.799833852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:47:52.809102  206678 docker.go:319] overlay module found
	I1126 20:47:52.812129  206678 out.go:179] * Using the docker driver based on existing profile
	I1126 20:47:52.815054  206678 start.go:309] selected driver: docker
	I1126 20:47:52.815063  206678 start.go:927] validating driver "docker" against &{Name:cert-expiration-164741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-164741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:47:52.815148  206678 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:47:52.815869  206678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:47:52.874018  206678 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:47:52.865081337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:47:52.874311  206678 cni.go:84] Creating CNI manager for ""
	I1126 20:47:52.874369  206678 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:47:52.874406  206678 start.go:353] cluster config:
	{Name:cert-expiration-164741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-164741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:47:52.877440  206678 out.go:179] * Starting "cert-expiration-164741" primary control-plane node in "cert-expiration-164741" cluster
	I1126 20:47:52.880382  206678 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:47:52.883465  206678 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:47:52.886368  206678 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:47:52.886404  206678 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:47:52.886413  206678 cache.go:65] Caching tarball of preloaded images
	I1126 20:47:52.886428  206678 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:47:52.886498  206678 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:47:52.886507  206678 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:47:52.886615  206678 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/cert-expiration-164741/config.json ...
	I1126 20:47:52.909112  206678 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:47:52.909123  206678 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:47:52.909136  206678 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:47:52.909166  206678 start.go:360] acquireMachinesLock for cert-expiration-164741: {Name:mka3ecf1e428c26500994e5e1766791d0c225fa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:47:52.909229  206678 start.go:364] duration metric: took 46.054µs to acquireMachinesLock for "cert-expiration-164741"
	I1126 20:47:52.909248  206678 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:47:52.909252  206678 fix.go:54] fixHost starting: 
	I1126 20:47:52.909532  206678 cli_runner.go:164] Run: docker container inspect cert-expiration-164741 --format={{.State.Status}}
	I1126 20:47:52.926133  206678 fix.go:112] recreateIfNeeded on cert-expiration-164741: state=Running err=<nil>
	W1126 20:47:52.926169  206678 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:47:52.929482  206678 out.go:252] * Updating the running docker "cert-expiration-164741" container ...
	I1126 20:47:52.929505  206678 machine.go:94] provisionDockerMachine start ...
	I1126 20:47:52.929576  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:52.947051  206678 main.go:143] libmachine: Using SSH client type: native
	I1126 20:47:52.947374  206678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1126 20:47:52.947380  206678 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:47:53.106443  206678 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-164741
	
	I1126 20:47:53.106456  206678 ubuntu.go:182] provisioning hostname "cert-expiration-164741"
	I1126 20:47:53.106515  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:53.127275  206678 main.go:143] libmachine: Using SSH client type: native
	I1126 20:47:53.127588  206678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1126 20:47:53.127596  206678 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-164741 && echo "cert-expiration-164741" | sudo tee /etc/hostname
	I1126 20:47:53.292322  206678 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-164741
	
	I1126 20:47:53.292402  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:53.310734  206678 main.go:143] libmachine: Using SSH client type: native
	I1126 20:47:53.311029  206678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1126 20:47:53.311043  206678 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-164741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-164741/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-164741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:47:53.462183  206678 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:47:53.462197  206678 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:47:53.462214  206678 ubuntu.go:190] setting up certificates
	I1126 20:47:53.462223  206678 provision.go:84] configureAuth start
	I1126 20:47:53.462279  206678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-164741
	I1126 20:47:53.480400  206678 provision.go:143] copyHostCerts
	I1126 20:47:53.480458  206678 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:47:53.480471  206678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:47:53.480558  206678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:47:53.480674  206678 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:47:53.480678  206678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:47:53.480704  206678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:47:53.480750  206678 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:47:53.480754  206678 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:47:53.480776  206678 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:47:53.480823  206678 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-164741 san=[127.0.0.1 192.168.85.2 cert-expiration-164741 localhost minikube]
	I1126 20:47:53.720228  206678 provision.go:177] copyRemoteCerts
	I1126 20:47:53.720279  206678 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:47:53.720314  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:53.741714  206678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/cert-expiration-164741/id_rsa Username:docker}
	I1126 20:47:53.850637  206678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:47:53.870529  206678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1126 20:47:53.889021  206678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:47:53.909027  206678 provision.go:87] duration metric: took 446.781591ms to configureAuth
	I1126 20:47:53.909044  206678 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:47:53.909245  206678 config.go:182] Loaded profile config "cert-expiration-164741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:47:53.909371  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:53.926979  206678 main.go:143] libmachine: Using SSH client type: native
	I1126 20:47:53.927277  206678 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I1126 20:47:53.927289  206678 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:47:59.345916  206678 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:47:59.345998  206678 machine.go:97] duration metric: took 6.416487237s to provisionDockerMachine
	I1126 20:47:59.346008  206678 start.go:293] postStartSetup for "cert-expiration-164741" (driver="docker")
	I1126 20:47:59.346018  206678 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:47:59.346086  206678 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:47:59.346124  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:59.364160  206678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/cert-expiration-164741/id_rsa Username:docker}
	I1126 20:47:59.469574  206678 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:47:59.472894  206678 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:47:59.472911  206678 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:47:59.472920  206678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:47:59.472970  206678 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:47:59.473043  206678 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:47:59.473134  206678 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:47:59.480380  206678 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:47:59.496687  206678 start.go:296] duration metric: took 150.665893ms for postStartSetup
	I1126 20:47:59.496752  206678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:47:59.496787  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:59.514678  206678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/cert-expiration-164741/id_rsa Username:docker}
	I1126 20:47:59.615449  206678 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:47:59.620931  206678 fix.go:56] duration metric: took 6.71167234s for fixHost
	I1126 20:47:59.620947  206678 start.go:83] releasing machines lock for "cert-expiration-164741", held for 6.711710796s
	I1126 20:47:59.621013  206678 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-164741
	I1126 20:47:59.638689  206678 ssh_runner.go:195] Run: cat /version.json
	I1126 20:47:59.638732  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:59.638996  206678 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:47:59.639055  206678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-164741
	I1126 20:47:59.656859  206678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/cert-expiration-164741/id_rsa Username:docker}
	I1126 20:47:59.670875  206678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/cert-expiration-164741/id_rsa Username:docker}
	I1126 20:47:59.765624  206678 ssh_runner.go:195] Run: systemctl --version
	I1126 20:47:59.878771  206678 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:47:59.939378  206678 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:47:59.943930  206678 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:47:59.943997  206678 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:47:59.953076  206678 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:47:59.953098  206678 start.go:496] detecting cgroup driver to use...
	I1126 20:47:59.953128  206678 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:47:59.953171  206678 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:47:59.968250  206678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:47:59.980935  206678 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:47:59.980987  206678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:47:59.996467  206678 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:48:00.015081  206678 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:48:00.572570  206678 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:48:00.757899  206678 docker.go:234] disabling docker service ...
	I1126 20:48:00.757998  206678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:48:00.782242  206678 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:48:00.798104  206678 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:48:00.987561  206678 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:48:01.159035  206678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:48:01.178182  206678 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:48:01.207387  206678 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:48:01.207446  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.226827  206678 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:48:01.226980  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.240611  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.250864  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.266769  206678 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:48:01.282655  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.299296  206678 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.317761  206678 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:01.337004  206678 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:48:01.351247  206678 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:48:01.367827  206678 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:48:01.577163  206678 ssh_runner.go:195] Run: sudo systemctl restart crio
	
	
	==> CRI-O <==
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.158395926Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2455e69b-bf4f-42f1-b6ce-c0e65697c451 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.160247886Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=08b19a49-63d1-4bcb-aabd-1fb4a6a9ae3a name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.161728855Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq/dashboard-metrics-scraper" id=729de349-5a2d-4416-83d0-6c3af184aa43 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.161869708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.178277352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.179145845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.20010479Z" level=info msg="Created container dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq/dashboard-metrics-scraper" id=729de349-5a2d-4416-83d0-6c3af184aa43 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.202285412Z" level=info msg="Starting container: dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06" id=3f78be56-db5b-4574-9006-7ed3fb2e3549 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.207704927Z" level=info msg="Started container" PID=1659 containerID=dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq/dashboard-metrics-scraper id=3f78be56-db5b-4574-9006-7ed3fb2e3549 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67b17ca1b67f7c9236549e5bcba01e44670bcb917456901abfa98027ef1a25b6
	Nov 26 20:47:42 old-k8s-version-264537 conmon[1657]: conmon dc08ffa195fa3f67a325 <ninfo>: container 1659 exited with status 1
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.746331999Z" level=info msg="Removing container: e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554" id=838058ce-f55b-43f5-959c-9ea2dd75f3fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.756862461Z" level=info msg="Error loading conmon cgroup of container e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554: cgroup deleted" id=838058ce-f55b-43f5-959c-9ea2dd75f3fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:47:42 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:42.76179944Z" level=info msg="Removed container e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq/dashboard-metrics-scraper" id=838058ce-f55b-43f5-959c-9ea2dd75f3fb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.348873071Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.353334472Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.353370656Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.353404574Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.356770988Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.356829119Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.356871916Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.360082798Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.360177523Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.360201178Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.363158521Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:47:50 old-k8s-version-264537 crio[658]: time="2025-11-26T20:47:50.363190717Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	dc08ffa195fa3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   67b17ca1b67f7       dashboard-metrics-scraper-5f989dc9cf-fwkcq       kubernetes-dashboard
	1a32d3b9e1883       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   d531340ed6f2f       storage-provisioner                              kube-system
	6b03c1e9591cf       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   999806ebfe563       kubernetes-dashboard-8694d4445c-zpz9j            kubernetes-dashboard
	388e7dcee4c17       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   64fc3b4f4df93       coredns-5dd5756b68-w99t5                         kube-system
	806b8f773ec93       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   b50827bae7112       busybox                                          default
	bbe5e0cd6e0ff       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   d531340ed6f2f       storage-provisioner                              kube-system
	572c2c85ed7d7       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   f0d6a3a3a42c7       kube-proxy-9rv9c                                 kube-system
	cebff254eb17a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   1619fb7047a49       kindnet-6k58p                                    kube-system
	861cdf83e26ec       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   ec61a32c82932       kube-controller-manager-old-k8s-version-264537   kube-system
	f3a99a92a571f       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   92641896475fc       kube-scheduler-old-k8s-version-264537            kube-system
	400d8fb8f7491       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   31fb6c31328e9       etcd-old-k8s-version-264537                      kube-system
	aa3ee34dbdfd3       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   556feb345390a       kube-apiserver-old-k8s-version-264537            kube-system
	
	
	==> coredns [388e7dcee4c17a2c45f2d8d832a2db44f06e5528a2a10d8c3df08d344c25a223] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44858 - 38215 "HINFO IN 1499662222745090337.227637672896505582. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004364031s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-264537
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-264537
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=old-k8s-version-264537
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_46_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:45:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-264537
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:48:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:47:39 +0000   Wed, 26 Nov 2025 20:45:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:47:39 +0000   Wed, 26 Nov 2025 20:45:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:47:39 +0000   Wed, 26 Nov 2025 20:45:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:47:39 +0000   Wed, 26 Nov 2025 20:46:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-264537
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                8b1866d5-0ca9-4303-8791-a0bc9b937ae1
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-w99t5                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-old-k8s-version-264537                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-6k58p                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-264537             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-264537    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-9rv9c                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-264537             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fwkcq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-zpz9j             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s               kubelet          Node old-k8s-version-264537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s               kubelet          Node old-k8s-version-264537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s               kubelet          Node old-k8s-version-264537 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           112s               node-controller  Node old-k8s-version-264537 event: Registered Node old-k8s-version-264537 in Controller
	  Normal  NodeReady                97s                kubelet          Node old-k8s-version-264537 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node old-k8s-version-264537 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node old-k8s-version-264537 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node old-k8s-version-264537 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-264537 event: Registered Node old-k8s-version-264537 in Controller
	
	
	==> dmesg <==
	[Nov26 20:16] overlayfs: idmapped layers are currently not supported
	[Nov26 20:21] overlayfs: idmapped layers are currently not supported
	[ +33.563196] overlayfs: idmapped layers are currently not supported
	[Nov26 20:23] overlayfs: idmapped layers are currently not supported
	[Nov26 20:24] overlayfs: idmapped layers are currently not supported
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [400d8fb8f7491a2ea343f20eb65e87a3111f2b78d29d6524dbd6edd1594351e6] <==
	{"level":"info","ts":"2025-11-26T20:47:04.353166Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-26T20:47:04.353174Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-26T20:47:04.353357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-26T20:47:04.353417Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-26T20:47:04.353482Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:47:04.353506Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-26T20:47:04.420852Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-26T20:47:04.434265Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-26T20:47:04.434318Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-26T20:47:04.434395Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:47:04.434403Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-26T20:47:05.501974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-26T20:47:05.502096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-26T20:47:05.502171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-26T20:47:05.502211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-26T20:47:05.502242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-26T20:47:05.502275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-26T20:47:05.502308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-26T20:47:05.506105Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-264537 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-26T20:47:05.506196Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:47:05.507289Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-26T20:47:05.506217Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-26T20:47:05.518619Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-26T20:47:05.518715Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-26T20:47:05.52616Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:48:06 up  1:30,  0 user,  load average: 2.62, 2.94, 2.34
	Linux old-k8s-version-264537 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cebff254eb17a577d788fffed5cf8c4fbba80094b1b83ce0d7aa765376039071] <==
	I1126 20:47:10.194919       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:47:10.195124       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:47:10.195238       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:47:10.195249       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:47:10.195260       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:47:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:47:10.347799       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:47:10.347819       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:47:10.347827       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:47:10.348106       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:47:40.346442       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:47:40.348023       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:47:40.348100       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:47:40.349354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1126 20:47:41.648374       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:47:41.648403       1 metrics.go:72] Registering metrics
	I1126 20:47:41.648472       1 controller.go:711] "Syncing nftables rules"
	I1126 20:47:50.347601       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:47:50.347665       1 main.go:301] handling current node
	I1126 20:48:00.350025       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:48:00.350066       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aa3ee34dbdfd346cd8a9d14474e49263180adb69e68a3838f6741dca0ea9cdab] <==
	I1126 20:47:09.167293       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1126 20:47:09.169156       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:47:09.195212       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1126 20:47:09.222000       1 aggregator.go:166] initial CRD sync complete...
	I1126 20:47:09.222083       1 autoregister_controller.go:141] Starting autoregister controller
	I1126 20:47:09.222115       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:47:09.222154       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:47:09.231200       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:47:09.284733       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1126 20:47:09.833309       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:47:10.987873       1 controller.go:624] quota admission added evaluator for: namespaces
	I1126 20:47:11.053676       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1126 20:47:11.080411       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:47:11.097283       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:47:11.120927       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1126 20:47:11.206208       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.109.177"}
	I1126 20:47:11.262785       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.217.40"}
	E1126 20:47:19.161066       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I1126 20:47:21.767909       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1126 20:47:21.784685       1 controller.go:624] quota admission added evaluator for: endpoints
	I1126 20:47:21.928628       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1126 20:47:29.161318       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1126 20:47:39.161886       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E1126 20:47:49.162144       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1126 20:47:59.163369       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [861cdf83e26ecaeb9c2086ba6ee2b898b58cb27f499652485c8d87139834385c] <==
	I1126 20:47:21.921007       1 shared_informer.go:318] Caches are synced for node
	I1126 20:47:21.922881       1 range_allocator.go:174] "Sending events to api server"
	I1126 20:47:21.922987       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1126 20:47:21.923036       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1126 20:47:21.923068       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1126 20:47:21.925200       1 shared_informer.go:318] Caches are synced for taint
	I1126 20:47:21.925395       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1126 20:47:21.925551       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1126 20:47:21.925646       1 taint_manager.go:211] "Sending events to api server"
	I1126 20:47:21.926048       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-264537"
	I1126 20:47:21.926132       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1126 20:47:21.926230       1 event.go:307] "Event occurred" object="old-k8s-version-264537" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-264537 event: Registered Node old-k8s-version-264537 in Controller"
	I1126 20:47:21.934492       1 shared_informer.go:318] Caches are synced for TTL
	I1126 20:47:22.276347       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:47:22.276384       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1126 20:47:22.295934       1 shared_informer.go:318] Caches are synced for garbage collector
	I1126 20:47:27.739365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.93929ms"
	I1126 20:47:27.740177       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.048µs"
	I1126 20:47:31.731211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.359µs"
	I1126 20:47:32.737364       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.08µs"
	I1126 20:47:33.735944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.375µs"
	I1126 20:47:42.764584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.97µs"
	I1126 20:47:47.160187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.602158ms"
	I1126 20:47:47.160299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.161µs"
	I1126 20:47:52.181257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.486µs"
	
	
	==> kube-proxy [572c2c85ed7d71acf7cd0c767201ce638ca7e6d276cc20883a0484e7f244d60c] <==
	I1126 20:47:10.340845       1 server_others.go:69] "Using iptables proxy"
	I1126 20:47:10.367102       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1126 20:47:10.418579       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:47:10.434425       1 server_others.go:152] "Using iptables Proxier"
	I1126 20:47:10.441194       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1126 20:47:10.441216       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1126 20:47:10.441247       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1126 20:47:10.441488       1 server.go:846] "Version info" version="v1.28.0"
	I1126 20:47:10.441498       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:47:10.442557       1 config.go:188] "Starting service config controller"
	I1126 20:47:10.442583       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1126 20:47:10.442603       1 config.go:97] "Starting endpoint slice config controller"
	I1126 20:47:10.442606       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1126 20:47:10.444904       1 config.go:315] "Starting node config controller"
	I1126 20:47:10.444916       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1126 20:47:10.542936       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1126 20:47:10.542984       1 shared_informer.go:318] Caches are synced for service config
	I1126 20:47:10.545260       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f3a99a92a571f35772e62af2f45fb5484878af13d4e0ff35e1338a2d989b68d4] <==
	I1126 20:47:07.273509       1 serving.go:348] Generated self-signed cert in-memory
	W1126 20:47:09.037378       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:47:09.037481       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:47:09.037515       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:47:09.037557       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:47:09.172185       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1126 20:47:09.172227       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:47:09.177416       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1126 20:47:09.177594       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:47:09.177636       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1126 20:47:09.177697       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1126 20:47:09.278185       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.850588     789 topology_manager.go:215] "Topology Admit Handler" podUID="88b5eb99-bcb6-4aae-b2a8-afb053c2093c" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-zpz9j"
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.855012     789 topology_manager.go:215] "Topology Admit Handler" podUID="a26d05da-1c97-4489-89f6-9461174500e9" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-fwkcq"
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.903417     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxfcj\" (UniqueName: \"kubernetes.io/projected/88b5eb99-bcb6-4aae-b2a8-afb053c2093c-kube-api-access-wxfcj\") pod \"kubernetes-dashboard-8694d4445c-zpz9j\" (UID: \"88b5eb99-bcb6-4aae-b2a8-afb053c2093c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zpz9j"
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.903479     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a26d05da-1c97-4489-89f6-9461174500e9-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-fwkcq\" (UID: \"a26d05da-1c97-4489-89f6-9461174500e9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq"
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.903511     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxzk\" (UniqueName: \"kubernetes.io/projected/a26d05da-1c97-4489-89f6-9461174500e9-kube-api-access-cjxzk\") pod \"dashboard-metrics-scraper-5f989dc9cf-fwkcq\" (UID: \"a26d05da-1c97-4489-89f6-9461174500e9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq"
	Nov 26 20:47:21 old-k8s-version-264537 kubelet[789]: I1126 20:47:21.903537     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/88b5eb99-bcb6-4aae-b2a8-afb053c2093c-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-zpz9j\" (UID: \"88b5eb99-bcb6-4aae-b2a8-afb053c2093c\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zpz9j"
	Nov 26 20:47:22 old-k8s-version-264537 kubelet[789]: W1126 20:47:22.183224     789 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a5e16735df4ab067a4027b79e74d7f3e83fb1c35772f6c8d9f346c1a237a8747/crio-999806ebfe5633117192b24d5b68fefd5c9eee4ae59b1181b88458a03dd8a2a7 WatchSource:0}: Error finding container 999806ebfe5633117192b24d5b68fefd5c9eee4ae59b1181b88458a03dd8a2a7: Status 404 returned error can't find the container with id 999806ebfe5633117192b24d5b68fefd5c9eee4ae59b1181b88458a03dd8a2a7
	Nov 26 20:47:31 old-k8s-version-264537 kubelet[789]: I1126 20:47:31.709362     789 scope.go:117] "RemoveContainer" containerID="8103a63ef2fcabfbacabd12941e51bc1c098bd71cb4c9b82798f342ad5cb8f7a"
	Nov 26 20:47:31 old-k8s-version-264537 kubelet[789]: I1126 20:47:31.735948     789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-zpz9j" podStartSLOduration=6.016597476 podCreationTimestamp="2025-11-26 20:47:21 +0000 UTC" firstStartedPulling="2025-11-26 20:47:22.187253223 +0000 UTC m=+18.809561382" lastFinishedPulling="2025-11-26 20:47:26.906537462 +0000 UTC m=+23.528845637" observedRunningTime="2025-11-26 20:47:27.725563318 +0000 UTC m=+24.347871485" watchObservedRunningTime="2025-11-26 20:47:31.735881731 +0000 UTC m=+28.358189889"
	Nov 26 20:47:32 old-k8s-version-264537 kubelet[789]: I1126 20:47:32.713779     789 scope.go:117] "RemoveContainer" containerID="e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554"
	Nov 26 20:47:32 old-k8s-version-264537 kubelet[789]: I1126 20:47:32.714088     789 scope.go:117] "RemoveContainer" containerID="8103a63ef2fcabfbacabd12941e51bc1c098bd71cb4c9b82798f342ad5cb8f7a"
	Nov 26 20:47:32 old-k8s-version-264537 kubelet[789]: E1126 20:47:32.718824     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fwkcq_kubernetes-dashboard(a26d05da-1c97-4489-89f6-9461174500e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq" podUID="a26d05da-1c97-4489-89f6-9461174500e9"
	Nov 26 20:47:33 old-k8s-version-264537 kubelet[789]: I1126 20:47:33.717780     789 scope.go:117] "RemoveContainer" containerID="e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554"
	Nov 26 20:47:33 old-k8s-version-264537 kubelet[789]: E1126 20:47:33.718107     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fwkcq_kubernetes-dashboard(a26d05da-1c97-4489-89f6-9461174500e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq" podUID="a26d05da-1c97-4489-89f6-9461174500e9"
	Nov 26 20:47:40 old-k8s-version-264537 kubelet[789]: I1126 20:47:40.735773     789 scope.go:117] "RemoveContainer" containerID="bbe5e0cd6e0ff7a9722e7413ce8f89636a2abf001545e870532eebd22a93e60e"
	Nov 26 20:47:42 old-k8s-version-264537 kubelet[789]: I1126 20:47:42.157542     789 scope.go:117] "RemoveContainer" containerID="e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554"
	Nov 26 20:47:42 old-k8s-version-264537 kubelet[789]: I1126 20:47:42.744069     789 scope.go:117] "RemoveContainer" containerID="e40d5565b3589c13d9160a5662151db6a3194af44f173ef70818db41648b8554"
	Nov 26 20:47:42 old-k8s-version-264537 kubelet[789]: I1126 20:47:42.744246     789 scope.go:117] "RemoveContainer" containerID="dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06"
	Nov 26 20:47:42 old-k8s-version-264537 kubelet[789]: E1126 20:47:42.744528     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fwkcq_kubernetes-dashboard(a26d05da-1c97-4489-89f6-9461174500e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq" podUID="a26d05da-1c97-4489-89f6-9461174500e9"
	Nov 26 20:47:52 old-k8s-version-264537 kubelet[789]: I1126 20:47:52.158064     789 scope.go:117] "RemoveContainer" containerID="dc08ffa195fa3f67a3256403e4779cb360c1829fb2d6ae2466081c6475105a06"
	Nov 26 20:47:52 old-k8s-version-264537 kubelet[789]: E1126 20:47:52.158374     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fwkcq_kubernetes-dashboard(a26d05da-1c97-4489-89f6-9461174500e9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fwkcq" podUID="a26d05da-1c97-4489-89f6-9461174500e9"
	Nov 26 20:48:01 old-k8s-version-264537 kubelet[789]: I1126 20:48:01.498638     789 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 26 20:48:01 old-k8s-version-264537 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:48:01 old-k8s-version-264537 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:48:01 old-k8s-version-264537 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [6b03c1e9591cf653833abe711757adaa1bdfd1190816fd296d1f0a76357eae13] <==
	2025/11/26 20:47:26 Using namespace: kubernetes-dashboard
	2025/11/26 20:47:26 Using in-cluster config to connect to apiserver
	2025/11/26 20:47:26 Using secret token for csrf signing
	2025/11/26 20:47:26 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:47:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:47:26 Successful initial request to the apiserver, version: v1.28.0
	2025/11/26 20:47:26 Generating JWE encryption key
	2025/11/26 20:47:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:47:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:47:28 Initializing JWE encryption key from synchronized object
	2025/11/26 20:47:28 Creating in-cluster Sidecar client
	2025/11/26 20:47:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:47:28 Serving insecurely on HTTP port: 9090
	2025/11/26 20:47:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:47:26 Starting overwatch
	
	
	==> storage-provisioner [1a32d3b9e1883a1260ba649c81bec9f5cb7ef22f8f4590f95375ae969df1afa3] <==
	I1126 20:47:40.787861       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:47:40.800640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:47:40.800685       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1126 20:47:58.196965       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:47:58.197146       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-264537_a47e6b84-3eee-4f8e-b44f-3f9bca49c9bf!
	I1126 20:47:58.197404       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24d009d6-7643-48bd-8682-d8a75e344fd3", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-264537_a47e6b84-3eee-4f8e-b44f-3f9bca49c9bf became leader
	I1126 20:47:58.297808       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-264537_a47e6b84-3eee-4f8e-b44f-3f9bca49c9bf!
	
	
	==> storage-provisioner [bbe5e0cd6e0ff7a9722e7413ce8f89636a2abf001545e870532eebd22a93e60e] <==
	I1126 20:47:10.275253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:47:40.283375       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-264537 -n old-k8s-version-264537
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-264537 -n old-k8s-version-264537: exit status 2 (349.579616ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-264537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.3s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-956694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-956694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (253.295653ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:49:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-956694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-956694 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-956694 describe deploy/metrics-server -n kube-system: exit status 1 (78.79348ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-956694 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-956694
helpers_test.go:243: (dbg) docker inspect no-preload-956694:

-- stdout --
	[
	    {
	        "Id": "53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b",
	        "Created": "2025-11-26T20:48:11.257955221Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208907,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:48:11.323646322Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/hostname",
	        "HostsPath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/hosts",
	        "LogPath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b-json.log",
	        "Name": "/no-preload-956694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-956694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-956694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b",
	                "LowerDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-956694",
	                "Source": "/var/lib/docker/volumes/no-preload-956694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-956694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-956694",
	                "name.minikube.sigs.k8s.io": "no-preload-956694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d38c4d91631702f518fa4ac47d84420178c9a3e664aca773a5545932ba1c55f0",
	            "SandboxKey": "/var/run/docker/netns/d38c4d916317",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-956694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:68:72:4a:90:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "32516947827eacd2aa341e65200cd5dd0564df7db92f9b17b625c9371ac2deac",
	                    "EndpointID": "9deb83262aad37edf87c0529016ebf5d71be1bd0432dc33fc1c55dd75b10cab0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-956694",
	                        "53e8b694caf6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-956694 -n no-preload-956694
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-956694 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-956694 logs -n 25: (1.116810789s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-235709 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ ssh     │ -p cilium-235709 sudo crio config                                                                                                                                                                                                             │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │                     │
	│ delete  │ -p cilium-235709                                                                                                                                                                                                                              │ cilium-235709             │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p force-systemd-env-274518 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-274518  │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ ssh     │ force-systemd-flag-622960 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-622960 │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ delete  │ -p force-systemd-flag-622960                                                                                                                                                                                                                  │ force-systemd-flag-622960 │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-164741    │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ delete  │ -p force-systemd-env-274518                                                                                                                                                                                                                   │ force-systemd-env-274518  │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p cert-options-207115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ cert-options-207115 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ -p cert-options-207115 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ delete  │ -p cert-options-207115                                                                                                                                                                                                                        │ cert-options-207115       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-264537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │                     │
	│ stop    │ -p old-k8s-version-264537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-264537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:47 UTC │
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164741    │ jenkins │ v1.37.0 │ 26 Nov 25 20:47 UTC │                     │
	│ image   │ old-k8s-version-264537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ pause   │ -p old-k8s-version-264537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │                     │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                                                                                     │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                                                                                     │ old-k8s-version-264537    │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-956694         │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable metrics-server -p no-preload-956694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-956694         │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:48:10
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:48:10.275656  208605 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:48:10.275904  208605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:48:10.275935  208605 out.go:374] Setting ErrFile to fd 2...
	I1126 20:48:10.275955  208605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:48:10.276368  208605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:48:10.277005  208605 out.go:368] Setting JSON to false
	I1126 20:48:10.277916  208605 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5421,"bootTime":1764184670,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:48:10.278231  208605 start.go:143] virtualization:  
	I1126 20:48:10.282265  208605 out.go:179] * [no-preload-956694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:48:10.286779  208605 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:48:10.287007  208605 notify.go:221] Checking for updates...
	I1126 20:48:10.293162  208605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:48:10.296393  208605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:48:10.299639  208605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:48:10.302823  208605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:48:10.305914  208605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:48:10.309566  208605 config.go:182] Loaded profile config "cert-expiration-164741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:48:10.309697  208605 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:48:10.341351  208605 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:48:10.341478  208605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:48:10.396999  208605 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-26 20:48:10.387144402 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:48:10.397114  208605 docker.go:319] overlay module found
	I1126 20:48:10.400489  208605 out.go:179] * Using the docker driver based on user configuration
	I1126 20:48:10.403570  208605 start.go:309] selected driver: docker
	I1126 20:48:10.403594  208605 start.go:927] validating driver "docker" against <nil>
	I1126 20:48:10.403608  208605 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:48:10.404352  208605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:48:10.463955  208605 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-26 20:48:10.455062893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:48:10.464110  208605 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 20:48:10.464340  208605 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:48:10.467378  208605 out.go:179] * Using Docker driver with root privileges
	I1126 20:48:10.470154  208605 cni.go:84] Creating CNI manager for ""
	I1126 20:48:10.470217  208605 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:48:10.470230  208605 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 20:48:10.470305  208605 start.go:353] cluster config:
	{Name:no-preload-956694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-956694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:48:10.473442  208605 out.go:179] * Starting "no-preload-956694" primary control-plane node in "no-preload-956694" cluster
	I1126 20:48:10.476380  208605 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:48:10.479286  208605 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:48:10.482139  208605 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:48:10.482213  208605 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:48:10.482264  208605 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/config.json ...
	I1126 20:48:10.482293  208605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/config.json: {Name:mkb9cb365bd4185a352fee74b24f377964ca5ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:48:10.482554  208605 cache.go:107] acquiring lock: {Name:mk95258a7fff2b710bb5ace8d787f94d5e958c18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:48:10.482665  208605 cache.go:115] /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1126 20:48:10.482690  208605 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.43µs
	I1126 20:48:10.482728  208605 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1126 20:48:10.482756  208605 cache.go:107] acquiring lock: {Name:mk80c64dc315aa97f6c395684f926e222a1c05a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:48:10.482810  208605 cache.go:115] /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1126 20:48:10.482837  208605 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 82.5µs
	I1126 20:48:10.482860  208605 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1126 20:48:10.482886  208605 cache.go:107] acquiring lock: {Name:mk3cecd14b4d9c1df4ef6dfdc15f03f3fe20eff0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:48:10.482937  208605 cache.go:115] /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1126 20:48:10.482964  208605 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 79.119µs
	I1126 20:48:10.482987  208605 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1126 20:48:10.483014  208605 cache.go:107] acquiring lock: {Name:mke40ef3350bdc50917af8f5d81b8fc7e39a0e9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:48:10.483061  208605 cache.go:115] /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1126 20:48:10.483089  208605 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 76.051µs
	I1126 20:48:10.483111  208605 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1126 20:48:10.483138  208605 cache.go:107] acquiring lock: {Name:mkaf5b5b3b0ac32c3181e56d6ebfdf7aac5d3b83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:48:10.483184  208605 cache.go:115] /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1126 20:48:10.483216  208605 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 79.792µs
	I1126 20:48:10.483239  208605 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1126 20:48:10.483264  208605 cache.go:107] acquiring lock: {Name:mkf172ecba0a9fd5911a31f49becbc46c56f9b8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:48:10.483315  208605 cache.go:115] /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1126 20:48:10.483343  208605 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 80.268µs
	I1126 20:48:10.483367  208605 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1126 20:48:10.483418  208605 cache.go:107] acquiring lock: {Name:mk00138cced1f56f97c321b88bf190c6b36b3025 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:48:10.483485  208605 cache.go:115] /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1126 20:48:10.483508  208605 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 91.894µs
	I1126 20:48:10.483530  208605 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1126 20:48:10.483562  208605 cache.go:107] acquiring lock: {Name:mk9993fbfcfda112e17a6a138436a25447f83e37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:48:10.483615  208605 cache.go:115] /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1126 20:48:10.483636  208605 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 76.068µs
	I1126 20:48:10.483665  208605 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1126 20:48:10.483686  208605 cache.go:87] Successfully saved all images to host disk.
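The cache checks above all resolve an image reference to a tarball under `.minikube/cache/images/<arch>/`, with the tag separator `:` rewritten to `_` (e.g. `registry.k8s.io/pause:3.10.1` → `registry.k8s.io/pause_3.10.1`). A minimal sketch of that mapping, inferred from the paths in the log (the function name and the shortened default root are hypothetical, not minikube's actual API):

```python
from pathlib import PurePosixPath

def cache_path(image, arch="arm64", root="/home/jenkins/.minikube"):
    """Map an image ref to its on-disk cache tarball path, as seen in the
    log: the tag separator ':' becomes '_'; the repo path is kept as-is.
    (Illustrative reimplementation; root is shortened for readability.)"""
    return str(PurePosixPath(root, "cache", "images", arch,
                             image.replace(":", "_")))

print(cache_path("registry.k8s.io/pause:3.10.1"))
# /home/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
```

Nested repositories such as `registry.k8s.io/coredns/coredns:v1.12.1` keep their directory structure, which is why the log shows a `coredns/coredns_v1.12.1` leaf.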
	I1126 20:48:10.502639  208605 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:48:10.502661  208605 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:48:10.502681  208605 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:48:10.502711  208605 start.go:360] acquireMachinesLock for no-preload-956694: {Name:mke86ccef68f41faa470c9124bab3372a2d4bf7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:48:10.502818  208605 start.go:364] duration metric: took 87.012µs to acquireMachinesLock for "no-preload-956694"
	I1126 20:48:10.502846  208605 start.go:93] Provisioning new machine with config: &{Name:no-preload-956694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-956694 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:48:10.502916  208605 start.go:125] createHost starting for "" (driver="docker")
	I1126 20:48:10.508208  208605 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:48:10.508447  208605 start.go:159] libmachine.API.Create for "no-preload-956694" (driver="docker")
	I1126 20:48:10.508486  208605 client.go:173] LocalClient.Create starting
	I1126 20:48:10.508562  208605 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem
	I1126 20:48:10.508622  208605 main.go:143] libmachine: Decoding PEM data...
	I1126 20:48:10.508644  208605 main.go:143] libmachine: Parsing certificate...
	I1126 20:48:10.508700  208605 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem
	I1126 20:48:10.508723  208605 main.go:143] libmachine: Decoding PEM data...
	I1126 20:48:10.508735  208605 main.go:143] libmachine: Parsing certificate...
	I1126 20:48:10.509105  208605 cli_runner.go:164] Run: docker network inspect no-preload-956694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:48:10.525603  208605 cli_runner.go:211] docker network inspect no-preload-956694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:48:10.525699  208605 network_create.go:284] running [docker network inspect no-preload-956694] to gather additional debugging logs...
	I1126 20:48:10.525720  208605 cli_runner.go:164] Run: docker network inspect no-preload-956694
	W1126 20:48:10.539773  208605 cli_runner.go:211] docker network inspect no-preload-956694 returned with exit code 1
	I1126 20:48:10.539807  208605 network_create.go:287] error running [docker network inspect no-preload-956694]: docker network inspect no-preload-956694: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-956694 not found
	I1126 20:48:10.539821  208605 network_create.go:289] output of [docker network inspect no-preload-956694]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-956694 not found
	
	** /stderr **
	I1126 20:48:10.539922  208605 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:48:10.555345  208605 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20cb65a83ad5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:26:47:2b:2e:03} reservation:<nil>}
	I1126 20:48:10.555678  208605 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-16105a7ff776 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:75:f6:9d:ad:ac} reservation:<nil>}
	I1126 20:48:10.555992  208605 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f1c69ea9dfa3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b7:bf:8a:44:80} reservation:<nil>}
	I1126 20:48:10.556384  208605 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019dfbc0}
	I1126 20:48:10.556406  208605 network_create.go:124] attempt to create docker network no-preload-956694 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1126 20:48:10.556462  208605 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-956694 no-preload-956694
	I1126 20:48:10.611112  208605 network_create.go:108] docker network no-preload-956694 192.168.76.0/24 created
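The subnet selection above skips 192.168.49.0/24, 192.168.58.0/24, and 192.168.67.0/24 as taken and settles on 192.168.76.0/24, which is consistent with stepping the third octet by 9 per attempt. A sketch of that scan under that assumption (the step size and function name are inferred from the log, not taken from minikube's source):

```python
import ipaddress

def pick_free_subnet(taken, start="192.168.49.0/24", step=9, attempts=30):
    """Walk candidate private /24 subnets, stepping the third octet,
    until one is not already claimed by an existing bridge network.
    `taken` is a set of CIDR strings gathered from `docker network inspect`."""
    net = ipaddress.ip_network(start)
    for _ in range(attempts):
        if str(net) not in taken:
            return str(net)
        octets = list(net.network_address.packed)
        octets[2] += step
        if octets[2] > 255:
            break
        net = ipaddress.ip_network(".".join(map(str, octets)) + "/24")
    raise RuntimeError("no free private subnet found")

taken = {"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
print(pick_free_subnet(taken))  # 192.168.76.0/24
```

With the chosen subnet in hand, the log then issues `docker network create --driver=bridge --subnet=... --gateway=...` and reserves `.2` as the container's static IP.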
	I1126 20:48:10.611145  208605 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-956694" container
	I1126 20:48:10.611228  208605 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:48:10.630072  208605 cli_runner.go:164] Run: docker volume create no-preload-956694 --label name.minikube.sigs.k8s.io=no-preload-956694 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:48:10.648675  208605 oci.go:103] Successfully created a docker volume no-preload-956694
	I1126 20:48:10.648766  208605 cli_runner.go:164] Run: docker run --rm --name no-preload-956694-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-956694 --entrypoint /usr/bin/test -v no-preload-956694:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:48:11.188726  208605 oci.go:107] Successfully prepared a docker volume no-preload-956694
	I1126 20:48:11.188802  208605 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1126 20:48:11.188941  208605 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1126 20:48:11.189043  208605 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:48:11.243122  208605 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-956694 --name no-preload-956694 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-956694 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-956694 --network no-preload-956694 --ip 192.168.76.2 --volume no-preload-956694:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:48:11.537246  208605 cli_runner.go:164] Run: docker container inspect no-preload-956694 --format={{.State.Running}}
	I1126 20:48:11.559531  208605 cli_runner.go:164] Run: docker container inspect no-preload-956694 --format={{.State.Status}}
	I1126 20:48:11.581599  208605 cli_runner.go:164] Run: docker exec no-preload-956694 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:48:11.635888  208605 oci.go:144] the created container "no-preload-956694" has a running status.
	I1126 20:48:11.635915  208605 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/no-preload-956694/id_rsa...
	I1126 20:48:12.287349  208605 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-2326/.minikube/machines/no-preload-956694/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:48:12.308948  208605 cli_runner.go:164] Run: docker container inspect no-preload-956694 --format={{.State.Status}}
	I1126 20:48:12.329512  208605 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:48:12.329531  208605 kic_runner.go:114] Args: [docker exec --privileged no-preload-956694 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:48:12.370710  208605 cli_runner.go:164] Run: docker container inspect no-preload-956694 --format={{.State.Status}}
	I1126 20:48:12.388505  208605 machine.go:94] provisionDockerMachine start ...
	I1126 20:48:12.388602  208605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
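The Go template in the command above, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, digs the first host-side binding for the container's SSH port out of `docker container inspect` output. The equivalent lookup over the parsed JSON (sample data below is illustrative, mirroring the port 33053 seen later in the log):

```python
def host_port(inspect_json, container_port="22/tcp"):
    """Equivalent of the log's Go template: return the first host-side
    port binding for a given container port from `docker inspect` JSON."""
    return inspect_json["NetworkSettings"]["Ports"][container_port][0]["HostPort"]

# Hypothetical inspect output, shaped like the real `docker inspect` schema.
sample = {"NetworkSettings": {"Ports": {
    "22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33053"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33054"}],
}}}
print(host_port(sample))  # 33053
```

Because the container was started with `--publish=127.0.0.1::22`, Docker assigns an ephemeral host port, so this lookup is how the SSH client learns where to dial.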
	I1126 20:48:12.408000  208605 main.go:143] libmachine: Using SSH client type: native
	I1126 20:48:12.408342  208605 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1126 20:48:12.408352  208605 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:48:12.409020  208605 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37388->127.0.0.1:33053: read: connection reset by peer
	I1126 20:48:15.562041  208605 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-956694
	
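The first dial at 20:48:12 fails with `connection reset by peer` (sshd inside the freshly started container is not up yet), and the command succeeds on a later attempt at 20:48:15. A generic retry loop capturing that behavior (a sketch, not minikube's actual dial code):

```python
import time

def dial_with_retry(connect, attempts=5, delay=1.0):
    """Retry a dial that may be reset while the container's sshd is
    still starting, as happens on the log's first SSH attempt."""
    last = None
    for _ in range(attempts):
        try:
            return connect()
        except ConnectionResetError as exc:
            last = exc
            time.sleep(delay)
    raise last

# Simulated dial: fails twice, then the daemon is ready.
calls = iter([ConnectionResetError("reset"), ConnectionResetError("reset"), "session"])
def fake_connect():
    item = next(calls)
    if isinstance(item, Exception):
        raise item
    return item

print(dial_with_retry(fake_connect, delay=0))  # session
```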
	I1126 20:48:15.562068  208605 ubuntu.go:182] provisioning hostname "no-preload-956694"
	I1126 20:48:15.562133  208605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
	I1126 20:48:15.580663  208605 main.go:143] libmachine: Using SSH client type: native
	I1126 20:48:15.581001  208605 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1126 20:48:15.581021  208605 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-956694 && echo "no-preload-956694" | sudo tee /etc/hostname
	I1126 20:48:15.739050  208605 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-956694
	
	I1126 20:48:15.739135  208605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
	I1126 20:48:15.756586  208605 main.go:143] libmachine: Using SSH client type: native
	I1126 20:48:15.756922  208605 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1126 20:48:15.756948  208605 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-956694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-956694/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-956694' | sudo tee -a /etc/hosts; 
				fi
			fi
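The shell snippet above is idempotent: it only touches `/etc/hosts` when no entry for the hostname exists, preferring to rewrite an existing `127.0.1.1` line over appending a new one. The same logic expressed over the file's text (an illustrative Python mirror of the shell, not code from minikube):

```python
import re

def ensure_hostname(hosts_text, hostname):
    """Mirror the log's shell: if no line ends with the hostname,
    rewrite an existing 127.0.1.1 entry, else append one."""
    if re.search(rf"^.*\s{re.escape(hostname)}$", hosts_text, re.M):
        return hosts_text  # already present; nothing to do
    if re.search(r"^127\.0\.1\.1\s.*$", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {hostname}",
                      hosts_text, flags=re.M)
    return hosts_text.rstrip("\n") + f"\n127.0.1.1 {hostname}\n"

print(ensure_hostname("127.0.0.1 localhost\n", "no-preload-956694"))
```

Running it twice leaves the file unchanged on the second pass, matching the `grep`-guarded shell.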
	I1126 20:48:15.906152  208605 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:48:15.906195  208605 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:48:15.906223  208605 ubuntu.go:190] setting up certificates
	I1126 20:48:15.906234  208605 provision.go:84] configureAuth start
	I1126 20:48:15.906300  208605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-956694
	I1126 20:48:15.923755  208605 provision.go:143] copyHostCerts
	I1126 20:48:15.923820  208605 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:48:15.923829  208605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:48:15.923910  208605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:48:15.924016  208605 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:48:15.924026  208605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:48:15.924057  208605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:48:15.924139  208605 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:48:15.924153  208605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:48:15.924179  208605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:48:15.924228  208605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.no-preload-956694 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-956694]
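The server cert above is generated with SANs covering every name and address a client might use to reach the node: loopback, the container's static IP, `localhost`, `minikube`, and the profile name. The ordering printed in the log is consistent with a sorted, de-duplicated set; assembling such a list (an assumption about the construction, with a hypothetical helper name):

```python
def server_cert_sans(machine_ip, machine_name):
    """Assemble the SAN list shown in the log for the server cert.
    De-duplication and sorting are assumptions based on the log's output."""
    return sorted({"127.0.0.1", machine_ip, "localhost", "minikube", machine_name})

print(server_cert_sans("192.168.76.2", "no-preload-956694"))
# ['127.0.0.1', '192.168.76.2', 'localhost', 'minikube', 'no-preload-956694']
```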
	I1126 20:48:16.261454  208605 provision.go:177] copyRemoteCerts
	I1126 20:48:16.261520  208605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:48:16.261569  208605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
	I1126 20:48:16.278693  208605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/no-preload-956694/id_rsa Username:docker}
	I1126 20:48:16.381960  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:48:16.399879  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:48:16.417030  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:48:16.434273  208605 provision.go:87] duration metric: took 528.018528ms to configureAuth
	I1126 20:48:16.434309  208605 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:48:16.434495  208605 config.go:182] Loaded profile config "no-preload-956694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:48:16.434603  208605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
	I1126 20:48:16.451609  208605 main.go:143] libmachine: Using SSH client type: native
	I1126 20:48:16.451930  208605 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1126 20:48:16.451948  208605 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:48:16.781309  208605 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:48:16.781330  208605 machine.go:97] duration metric: took 4.392800301s to provisionDockerMachine
	I1126 20:48:16.781342  208605 client.go:176] duration metric: took 6.272845172s to LocalClient.Create
	I1126 20:48:16.781358  208605 start.go:167] duration metric: took 6.272913034s to libmachine.API.Create "no-preload-956694"
	I1126 20:48:16.781369  208605 start.go:293] postStartSetup for "no-preload-956694" (driver="docker")
	I1126 20:48:16.781379  208605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:48:16.781472  208605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:48:16.781526  208605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
	I1126 20:48:16.799566  208605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/no-preload-956694/id_rsa Username:docker}
	I1126 20:48:16.902162  208605 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:48:16.905436  208605 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:48:16.905460  208605 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:48:16.905471  208605 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:48:16.905526  208605 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:48:16.905605  208605 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:48:16.905707  208605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:48:16.913235  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:48:16.930957  208605 start.go:296] duration metric: took 149.574753ms for postStartSetup
	I1126 20:48:16.931310  208605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-956694
	I1126 20:48:16.949016  208605 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/config.json ...
	I1126 20:48:16.949356  208605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:48:16.949412  208605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
	I1126 20:48:16.967603  208605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/no-preload-956694/id_rsa Username:docker}
	I1126 20:48:17.070906  208605 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:48:17.075366  208605 start.go:128] duration metric: took 6.572436369s to createHost
	I1126 20:48:17.075395  208605 start.go:83] releasing machines lock for "no-preload-956694", held for 6.572563396s
	I1126 20:48:17.075471  208605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-956694
	I1126 20:48:17.094137  208605 ssh_runner.go:195] Run: cat /version.json
	I1126 20:48:17.094199  208605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
	I1126 20:48:17.094210  208605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:48:17.094289  208605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
	I1126 20:48:17.112647  208605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/no-preload-956694/id_rsa Username:docker}
	I1126 20:48:17.122063  208605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/no-preload-956694/id_rsa Username:docker}
	I1126 20:48:17.218145  208605 ssh_runner.go:195] Run: systemctl --version
	I1126 20:48:17.322795  208605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:48:17.358459  208605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:48:17.362800  208605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:48:17.362898  208605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:48:17.392807  208605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1126 20:48:17.392833  208605 start.go:496] detecting cgroup driver to use...
	I1126 20:48:17.392867  208605 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:48:17.392922  208605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:48:17.411096  208605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:48:17.423986  208605 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:48:17.424093  208605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:48:17.441591  208605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:48:17.459065  208605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:48:17.569659  208605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:48:17.695869  208605 docker.go:234] disabling docker service ...
	I1126 20:48:17.695977  208605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:48:17.717207  208605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:48:17.730661  208605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:48:17.849425  208605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:48:17.970053  208605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:48:17.982180  208605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:48:17.996529  208605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:48:17.996599  208605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:18.005364  208605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:48:18.005436  208605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:18.015878  208605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:18.025943  208605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:18.035291  208605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:48:18.043427  208605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:18.052316  208605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:18.066052  208605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:48:18.075134  208605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:48:18.082963  208605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:48:18.090661  208605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:48:18.196378  208605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:48:18.365040  208605 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:48:18.365170  208605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:48:18.369374  208605 start.go:564] Will wait 60s for crictl version
	I1126 20:48:18.369493  208605 ssh_runner.go:195] Run: which crictl
	I1126 20:48:18.372917  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:48:18.395888  208605 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:48:18.396031  208605 ssh_runner.go:195] Run: crio --version
	I1126 20:48:18.424323  208605 ssh_runner.go:195] Run: crio --version
	I1126 20:48:18.456565  208605 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:48:18.459338  208605 cli_runner.go:164] Run: docker network inspect no-preload-956694 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:48:18.475231  208605 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1126 20:48:18.479160  208605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:48:18.489226  208605 kubeadm.go:884] updating cluster {Name:no-preload-956694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-956694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:48:18.489339  208605 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:48:18.489384  208605 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:48:18.511945  208605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1126 20:48:18.511971  208605 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1126 20:48:18.512020  208605 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:48:18.512231  208605 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1126 20:48:18.512328  208605 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1126 20:48:18.512417  208605 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1126 20:48:18.512505  208605 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1126 20:48:18.512603  208605 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1126 20:48:18.512694  208605 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1126 20:48:18.512777  208605 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1126 20:48:18.515503  208605 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1126 20:48:18.515617  208605 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1126 20:48:18.515830  208605 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1126 20:48:18.515876  208605 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1126 20:48:18.515936  208605 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:48:18.516099  208605 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1126 20:48:18.516155  208605 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1126 20:48:18.515505  208605 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1126 20:48:18.860048  208605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1126 20:48:18.867861  208605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1126 20:48:18.881571  208605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1126 20:48:18.890051  208605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1126 20:48:18.910229  208605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1126 20:48:18.913783  208605 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1126 20:48:18.913834  208605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1126 20:48:18.913889  208605 ssh_runner.go:195] Run: which crictl
	I1126 20:48:18.945393  208605 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1126 20:48:18.945438  208605 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1126 20:48:18.945495  208605 ssh_runner.go:195] Run: which crictl
	I1126 20:48:18.959038  208605 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1126 20:48:18.959084  208605 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1126 20:48:18.959140  208605 ssh_runner.go:195] Run: which crictl
	I1126 20:48:18.959671  208605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1126 20:48:18.981551  208605 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1126 20:48:18.981594  208605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1126 20:48:18.981663  208605 ssh_runner.go:195] Run: which crictl
	I1126 20:48:18.990628  208605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1126 20:48:19.030678  208605 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1126 20:48:19.030716  208605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1126 20:48:19.030790  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1126 20:48:19.030907  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1126 20:48:19.030986  208605 ssh_runner.go:195] Run: which crictl
	I1126 20:48:19.031026  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1126 20:48:19.048342  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1126 20:48:19.048400  208605 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1126 20:48:19.048515  208605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1126 20:48:19.048559  208605 ssh_runner.go:195] Run: which crictl
	I1126 20:48:19.055311  208605 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1126 20:48:19.055364  208605 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1126 20:48:19.055409  208605 ssh_runner.go:195] Run: which crictl
	I1126 20:48:19.099244  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1126 20:48:19.105216  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1126 20:48:19.105326  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1126 20:48:19.105419  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1126 20:48:19.105639  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1126 20:48:19.105690  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1126 20:48:19.105735  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1126 20:48:19.176063  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1126 20:48:19.225341  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1126 20:48:19.225438  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1126 20:48:19.225500  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1126 20:48:19.225545  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1126 20:48:19.225583  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1126 20:48:19.230870  208605 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1126 20:48:19.230965  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1126 20:48:19.231121  208605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1126 20:48:19.313262  208605 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1126 20:48:19.313369  208605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1126 20:48:19.313476  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1126 20:48:19.313582  208605 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1126 20:48:19.313659  208605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1126 20:48:19.313701  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1126 20:48:19.313740  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1126 20:48:19.323770  208605 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1126 20:48:19.323944  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1126 20:48:19.323881  208605 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1126 20:48:19.324149  208605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1126 20:48:19.410842  208605 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1126 20:48:19.410884  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1126 20:48:19.410947  208605 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1126 20:48:19.411025  208605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1126 20:48:19.411095  208605 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1126 20:48:19.411143  208605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1126 20:48:19.411198  208605 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1126 20:48:19.411219  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1126 20:48:19.411280  208605 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1126 20:48:19.411332  208605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1126 20:48:19.411390  208605 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1126 20:48:19.411402  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1126 20:48:19.453408  208605 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1126 20:48:19.453494  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1126 20:48:19.453579  208605 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1126 20:48:19.453613  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1126 20:48:19.454972  208605 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1126 20:48:19.455050  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1126 20:48:19.485423  208605 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1126 20:48:19.485547  208605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1126 20:48:19.660515  208605 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1126 20:48:19.660797  208605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:48:20.008426  208605 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1126 20:48:20.008496  208605 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:48:20.008589  208605 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1126 20:48:20.008629  208605 ssh_runner.go:195] Run: which crictl
	I1126 20:48:20.099352  208605 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1126 20:48:20.099425  208605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1126 20:48:20.104487  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:48:21.879915  208605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.780462985s)
	I1126 20:48:21.879944  208605 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1126 20:48:21.879962  208605 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1126 20:48:21.880014  208605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1126 20:48:21.880106  208605 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.775543249s)
	I1126 20:48:21.880144  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:48:21.918855  208605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:48:23.498583  208605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.618543833s)
	I1126 20:48:23.498609  208605 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1126 20:48:23.498641  208605 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1126 20:48:23.498694  208605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1126 20:48:23.498772  208605 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.57989763s)
	I1126 20:48:23.498797  208605 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1126 20:48:23.498857  208605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1126 20:48:24.618525  208605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.119812206s)
	I1126 20:48:24.618555  208605 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1126 20:48:24.618574  208605 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1126 20:48:24.618620  208605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1126 20:48:24.618722  208605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.119855659s)
	I1126 20:48:24.618742  208605 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1126 20:48:24.618761  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1126 20:48:25.924760  208605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.306112871s)
	I1126 20:48:25.924784  208605 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1126 20:48:25.924800  208605 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1126 20:48:25.924847  208605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1126 20:48:27.314231  208605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.389360297s)
	I1126 20:48:27.314262  208605 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1126 20:48:27.314281  208605 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1126 20:48:27.314330  208605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1126 20:48:30.931047  208605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.616691912s)
	I1126 20:48:30.931077  208605 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1126 20:48:30.931096  208605 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1126 20:48:30.931142  208605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1126 20:48:31.532836  208605 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21974-2326/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1126 20:48:31.532868  208605 cache_images.go:125] Successfully loaded all cached images
	I1126 20:48:31.532874  208605 cache_images.go:94] duration metric: took 13.020890436s to LoadCachedImages
	I1126 20:48:31.532886  208605 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1126 20:48:31.532984  208605 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-956694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-956694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:48:31.533066  208605 ssh_runner.go:195] Run: crio config
	I1126 20:48:31.584070  208605 cni.go:84] Creating CNI manager for ""
	I1126 20:48:31.584095  208605 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:48:31.584109  208605 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:48:31.584131  208605 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-956694 NodeName:no-preload-956694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:48:31.584258  208605 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-956694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:48:31.584337  208605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:48:31.592967  208605 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1126 20:48:31.593034  208605 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1126 20:48:31.600700  208605 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1126 20:48:31.600724  208605 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
	I1126 20:48:31.600771  208605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:48:31.600790  208605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1126 20:48:31.600701  208605 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256
	I1126 20:48:31.600857  208605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1126 20:48:31.605561  208605 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1126 20:48:31.605592  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1126 20:48:31.620722  208605 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1126 20:48:31.620759  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1126 20:48:31.620874  208605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1126 20:48:31.641547  208605 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1126 20:48:31.641584  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1126 20:48:32.484521  208605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:48:32.493036  208605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1126 20:48:32.506672  208605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:48:32.520093  208605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1126 20:48:32.534934  208605 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:48:32.539039  208605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:48:32.548755  208605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:48:32.664242  208605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:48:32.680764  208605 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694 for IP: 192.168.76.2
	I1126 20:48:32.680784  208605 certs.go:195] generating shared ca certs ...
	I1126 20:48:32.680799  208605 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:48:32.680927  208605 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:48:32.680969  208605 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:48:32.680976  208605 certs.go:257] generating profile certs ...
	I1126 20:48:32.681035  208605 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.key
	I1126 20:48:32.681046  208605 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt with IP's: []
	I1126 20:48:32.763072  208605 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt ...
	I1126 20:48:32.763104  208605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: {Name:mk55e2f53b306b561577f4aff8625254fa315876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:48:32.763336  208605 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.key ...
	I1126 20:48:32.763350  208605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.key: {Name:mkd487bab77298078f02ec83460cf69382f7e04a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:48:32.763446  208605 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.key.fd2415c9
	I1126 20:48:32.763462  208605 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.crt.fd2415c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1126 20:48:33.048264  208605 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.crt.fd2415c9 ...
	I1126 20:48:33.048298  208605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.crt.fd2415c9: {Name:mk32e7bec155003c17f58fd351fe3047bd60652b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:48:33.048490  208605 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.key.fd2415c9 ...
	I1126 20:48:33.048510  208605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.key.fd2415c9: {Name:mke659b5c224f5e1a22f9f8b26e313d19bfa8a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:48:33.048600  208605 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.crt.fd2415c9 -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.crt
	I1126 20:48:33.048682  208605 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.key.fd2415c9 -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.key
	I1126 20:48:33.048749  208605 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/proxy-client.key
	I1126 20:48:33.048771  208605 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/proxy-client.crt with IP's: []
	I1126 20:48:33.213799  208605 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/proxy-client.crt ...
	I1126 20:48:33.213832  208605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/proxy-client.crt: {Name:mk24c07724dfe5e67bdce03465a515ef75089812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:48:33.214022  208605 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/proxy-client.key ...
	I1126 20:48:33.214040  208605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/proxy-client.key: {Name:mke46ab08f86e84c3ee8234863b2ff87e4a4b409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:48:33.214228  208605 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:48:33.214282  208605 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:48:33.214301  208605 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:48:33.214335  208605 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:48:33.214362  208605 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:48:33.214397  208605 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:48:33.214445  208605 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:48:33.215022  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:48:33.233896  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:48:33.251594  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:48:33.269462  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:48:33.286687  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1126 20:48:33.304152  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1126 20:48:33.322110  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:48:33.344536  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:48:33.361911  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:48:33.379175  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:48:33.396409  208605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:48:33.413397  208605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:48:33.425650  208605 ssh_runner.go:195] Run: openssl version
	I1126 20:48:33.434169  208605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:48:33.443140  208605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:48:33.446809  208605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:48:33.446915  208605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:48:33.489221  208605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:48:33.498091  208605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:48:33.507118  208605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:48:33.511552  208605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:48:33.511626  208605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:48:33.552953  208605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:48:33.562153  208605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:48:33.570760  208605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:48:33.574559  208605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:48:33.574643  208605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:48:33.616072  208605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:48:33.624812  208605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:48:33.628859  208605 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:48:33.628969  208605 kubeadm.go:401] StartCluster: {Name:no-preload-956694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-956694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:48:33.629052  208605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:48:33.629125  208605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:48:33.657132  208605 cri.go:89] found id: ""
	I1126 20:48:33.657208  208605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:48:33.665256  208605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:48:33.672941  208605 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:48:33.673007  208605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:48:33.681095  208605 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:48:33.681167  208605 kubeadm.go:158] found existing configuration files:
	
	I1126 20:48:33.681235  208605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:48:33.689075  208605 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:48:33.689186  208605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:48:33.696788  208605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:48:33.704393  208605 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:48:33.704484  208605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:48:33.712180  208605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:48:33.719855  208605 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:48:33.719934  208605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:48:33.727491  208605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:48:33.735169  208605 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:48:33.735246  208605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:48:33.742574  208605 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:48:33.782353  208605 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:48:33.782587  208605 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:48:33.805645  208605 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:48:33.805767  208605 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1126 20:48:33.805841  208605 kubeadm.go:319] OS: Linux
	I1126 20:48:33.805904  208605 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:48:33.806023  208605 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1126 20:48:33.806112  208605 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:48:33.806183  208605 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:48:33.806250  208605 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:48:33.806329  208605 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:48:33.806401  208605 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:48:33.806475  208605 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:48:33.806558  208605 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1126 20:48:33.875163  208605 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:48:33.875318  208605 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:48:33.875447  208605 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:48:33.894989  208605 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:48:33.902374  208605 out.go:252]   - Generating certificates and keys ...
	I1126 20:48:33.902500  208605 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:48:33.902614  208605 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:48:34.464898  208605 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:48:34.847380  208605 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:48:35.438369  208605 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:48:36.237069  208605 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:48:36.670236  208605 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:48:36.670619  208605 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-956694] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:48:38.159503  208605 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:48:38.159862  208605 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-956694] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:48:39.190579  208605 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:48:39.629330  208605 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:48:39.681588  208605 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:48:39.681953  208605 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:48:40.771076  208605 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:48:40.959890  208605 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:48:41.043398  208605 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:48:42.035065  208605 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:48:42.137766  208605 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:48:42.151666  208605 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:48:42.151751  208605 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1126 20:48:42.161250  208605 out.go:252]   - Booting up control plane ...
	I1126 20:48:42.161479  208605 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:48:42.161588  208605 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:48:42.161707  208605 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:48:42.181541  208605 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:48:42.181673  208605 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 20:48:42.194630  208605 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 20:48:42.194742  208605 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:48:42.194789  208605 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:48:42.376637  208605 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 20:48:42.376765  208605 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 20:48:43.378026  208605 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001543764s
	I1126 20:48:43.381736  208605 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 20:48:43.381846  208605 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1126 20:48:43.381962  208605 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 20:48:43.382047  208605 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 20:48:45.894863  208605 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.51266276s
	I1126 20:48:48.878693  208605 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.496996163s
	I1126 20:48:49.383281  208605 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001537084s
	I1126 20:48:49.403999  208605 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:48:49.441838  208605 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:48:49.457748  208605 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:48:49.457971  208605 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-956694 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:48:49.470573  208605 kubeadm.go:319] [bootstrap-token] Using token: 296p3e.kym09io3ae3zds6j
	I1126 20:48:49.473610  208605 out.go:252]   - Configuring RBAC rules ...
	I1126 20:48:49.473730  208605 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:48:49.481826  208605 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:48:49.491212  208605 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:48:49.496441  208605 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:48:49.503495  208605 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:48:49.510189  208605 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:48:49.793698  208605 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:48:50.243761  208605 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:48:50.789147  208605 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:48:50.791246  208605 kubeadm.go:319] 
	I1126 20:48:50.791378  208605 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:48:50.791393  208605 kubeadm.go:319] 
	I1126 20:48:50.791477  208605 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:48:50.791483  208605 kubeadm.go:319] 
	I1126 20:48:50.791508  208605 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:48:50.791566  208605 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:48:50.791617  208605 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:48:50.791621  208605 kubeadm.go:319] 
	I1126 20:48:50.791679  208605 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:48:50.791683  208605 kubeadm.go:319] 
	I1126 20:48:50.791730  208605 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:48:50.791734  208605 kubeadm.go:319] 
	I1126 20:48:50.791786  208605 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:48:50.791861  208605 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:48:50.791929  208605 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:48:50.791933  208605 kubeadm.go:319] 
	I1126 20:48:50.792017  208605 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:48:50.792094  208605 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:48:50.792098  208605 kubeadm.go:319] 
	I1126 20:48:50.792182  208605 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 296p3e.kym09io3ae3zds6j \
	I1126 20:48:50.792286  208605 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a \
	I1126 20:48:50.792306  208605 kubeadm.go:319] 	--control-plane 
	I1126 20:48:50.792309  208605 kubeadm.go:319] 
	I1126 20:48:50.792393  208605 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:48:50.792397  208605 kubeadm.go:319] 
	I1126 20:48:50.792479  208605 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 296p3e.kym09io3ae3zds6j \
	I1126 20:48:50.792581  208605 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a 
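The `--discovery-token-ca-cert-hash` value printed in the join commands above is kubeadm's pinned SHA-256 digest of the cluster CA public key (the DER-encoded SubjectPublicKeyInfo). A minimal sketch of how that string is formatted, using stand-in bytes rather than a real CA certificate (extracting the SPKI from `/etc/kubernetes/pki/ca.crt` is out of scope here):

```python
import hashlib

def ca_cert_hash(spki_der: bytes) -> str:
    """Format a SHA-256 digest of CA public-key DER bytes the way kubeadm
    prints it for --discovery-token-ca-cert-hash (i.e. 'sha256:<hex>')."""
    return "sha256:" + hashlib.sha256(spki_der).hexdigest()

# Stand-in bytes; a real check would hash the SubjectPublicKeyInfo
# extracted from the cluster's CA certificate.
sample = b"example-spki-der"
print(ca_cert_hash(sample))
```

A joining node recomputes this digest from the CA it fetches during discovery and refuses to join if it does not match the pinned value.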
	I1126 20:48:50.794908  208605 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1126 20:48:50.795148  208605 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1126 20:48:50.795268  208605 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 20:48:50.795293  208605 cni.go:84] Creating CNI manager for ""
	I1126 20:48:50.795300  208605 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:48:50.798246  208605 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1126 20:48:50.801326  208605 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:48:50.805073  208605 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:48:50.805093  208605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:48:50.818081  208605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:48:51.113265  208605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:48:51.113396  208605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:48:51.113466  208605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-956694 minikube.k8s.io/updated_at=2025_11_26T20_48_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=no-preload-956694 minikube.k8s.io/primary=true
	I1126 20:48:51.342390  208605 ops.go:34] apiserver oom_adj: -16
	I1126 20:48:51.342498  208605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:48:51.842950  208605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:48:52.343565  208605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:48:52.842712  208605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:48:53.342603  208605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:48:53.842775  208605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:48:54.342615  208605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:48:54.843483  208605 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:48:54.929064  208605 kubeadm.go:1114] duration metric: took 3.815716622s to wait for elevateKubeSystemPrivileges
	I1126 20:48:54.929093  208605 kubeadm.go:403] duration metric: took 21.30013014s to StartCluster
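The `duration metric` lines above report Go-style duration strings such as `3.815716622s` and `21.30013014s`. A small parser for the units that appear in these logs, useful when post-processing a report like this one (this is an illustrative helper, not part of minikube):

```python
import re

def parse_go_duration(s: str) -> float:
    """Convert a Go-style duration string (e.g. '21.30013014s',
    '22.253612ms', '1m30s') into seconds."""
    units = {"ns": 1e-9, "us": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}
    total = 0.0
    # Longer unit suffixes must be tried before their single-letter prefixes.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|us|ns|h|m|s)", s):
        total += float(value) * units[unit]
    return total

print(parse_go_duration("1m30s"))  # 90.0
print(parse_go_duration("22.253612ms"))
```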
	I1126 20:48:54.929109  208605 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:48:54.929182  208605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:48:54.930149  208605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:48:54.930373  208605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:48:54.930388  208605 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:48:54.930619  208605 config.go:182] Loaded profile config "no-preload-956694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:48:54.930664  208605 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:48:54.930725  208605 addons.go:70] Setting storage-provisioner=true in profile "no-preload-956694"
	I1126 20:48:54.930743  208605 addons.go:239] Setting addon storage-provisioner=true in "no-preload-956694"
	I1126 20:48:54.930770  208605 host.go:66] Checking if "no-preload-956694" exists ...
	I1126 20:48:54.931231  208605 cli_runner.go:164] Run: docker container inspect no-preload-956694 --format={{.State.Status}}
	I1126 20:48:54.931697  208605 addons.go:70] Setting default-storageclass=true in profile "no-preload-956694"
	I1126 20:48:54.931719  208605 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-956694"
	I1126 20:48:54.931979  208605 cli_runner.go:164] Run: docker container inspect no-preload-956694 --format={{.State.Status}}
	I1126 20:48:54.933415  208605 out.go:179] * Verifying Kubernetes components...
	I1126 20:48:54.937234  208605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:48:54.959922  208605 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:48:54.965124  208605 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:48:54.965148  208605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:48:54.965214  208605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
	I1126 20:48:54.971294  208605 addons.go:239] Setting addon default-storageclass=true in "no-preload-956694"
	I1126 20:48:54.971334  208605 host.go:66] Checking if "no-preload-956694" exists ...
	I1126 20:48:54.971751  208605 cli_runner.go:164] Run: docker container inspect no-preload-956694 --format={{.State.Status}}
	I1126 20:48:55.000165  208605 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:48:55.000192  208605 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:48:55.000253  208605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
	I1126 20:48:55.007251  208605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/no-preload-956694/id_rsa Username:docker}
	I1126 20:48:55.036500  208605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/no-preload-956694/id_rsa Username:docker}
	I1126 20:48:55.282704  208605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:48:55.285771  208605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:48:55.372367  208605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:48:55.409250  208605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:48:55.861413  208605 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1126 20:48:55.863722  208605 node_ready.go:35] waiting up to 6m0s for node "no-preload-956694" to be "Ready" ...
	I1126 20:48:56.366519  208605 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-956694" context rescaled to 1 replicas
	I1126 20:48:56.477477  208605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.068143964s)
	I1126 20:48:56.477799  208605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.105410053s)
	I1126 20:48:56.494397  208605 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1126 20:48:56.497329  208605 addons.go:530] duration metric: took 1.566653088s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1126 20:48:57.867382  208605 node_ready.go:57] node "no-preload-956694" has "Ready":"False" status (will retry)
	W1126 20:49:00.367684  208605 node_ready.go:57] node "no-preload-956694" has "Ready":"False" status (will retry)
	W1126 20:49:02.867186  208605 node_ready.go:57] node "no-preload-956694" has "Ready":"False" status (will retry)
	W1126 20:49:05.367027  208605 node_ready.go:57] node "no-preload-956694" has "Ready":"False" status (will retry)
	W1126 20:49:07.367318  208605 node_ready.go:57] node "no-preload-956694" has "Ready":"False" status (will retry)
	I1126 20:49:09.371046  208605 node_ready.go:49] node "no-preload-956694" is "Ready"
	I1126 20:49:09.371071  208605 node_ready.go:38] duration metric: took 13.507278898s for node "no-preload-956694" to be "Ready" ...
	I1126 20:49:09.371084  208605 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:49:09.371138  208605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:49:09.387134  208605 api_server.go:72] duration metric: took 14.456717787s to wait for apiserver process to appear ...
	I1126 20:49:09.387157  208605 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:49:09.387176  208605 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:49:09.406425  208605 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:49:09.409392  208605 api_server.go:141] control plane version: v1.34.1
	I1126 20:49:09.409417  208605 api_server.go:131] duration metric: took 22.253612ms to wait for apiserver health ...
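The healthz wait above (and the earlier `node_ready` retries) follow the same poll-until-healthy-or-timeout pattern. A minimal, self-contained sketch of that loop, with a fake probe standing in for the real HTTPS check against `https://192.168.76.2:8443/healthz`:

```python
import time

def wait_healthy(check, timeout=240.0, interval=0.5):
    """Poll `check` (a callable returning True when healthy) until it
    succeeds or `timeout` seconds elapse; return whether it succeeded."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Fake probe that reports healthy on its third call.
calls = {"n": 0}
def probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_healthy(probe, timeout=5.0, interval=0.01))  # True
```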
	I1126 20:49:09.409426  208605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:49:09.414119  208605 system_pods.go:59] 8 kube-system pods found
	I1126 20:49:09.414152  208605 system_pods.go:61] "coredns-66bc5c9577-4z56c" [adf50d03-764a-47f2-8b7b-85682915bd69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:49:09.414159  208605 system_pods.go:61] "etcd-no-preload-956694" [30a458d9-4cc3-4efc-ac03-1c12fd3467b2] Running
	I1126 20:49:09.414164  208605 system_pods.go:61] "kindnet-dfdbx" [68b183f2-571b-476a-924c-7b0a22cfe302] Running
	I1126 20:49:09.414168  208605 system_pods.go:61] "kube-apiserver-no-preload-956694" [19dfb0a5-0634-42eb-b9a2-44bf5665b3ec] Running
	I1126 20:49:09.414172  208605 system_pods.go:61] "kube-controller-manager-no-preload-956694" [56618fe0-6b76-493c-986e-3acf20cc0c46] Running
	I1126 20:49:09.414178  208605 system_pods.go:61] "kube-proxy-2j4dg" [c799d69f-b86f-4ef0-82b2-0b4200f9164f] Running
	I1126 20:49:09.414181  208605 system_pods.go:61] "kube-scheduler-no-preload-956694" [07469dd8-7c87-4bea-8dda-a24815aa6db1] Running
	I1126 20:49:09.414186  208605 system_pods.go:61] "storage-provisioner" [c37b32d0-5da0-4557-91cf-d1d082be9471] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:49:09.414193  208605 system_pods.go:74] duration metric: took 4.760868ms to wait for pod list to return data ...
	I1126 20:49:09.414202  208605 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:49:09.418766  208605 default_sa.go:45] found service account: "default"
	I1126 20:49:09.418788  208605 default_sa.go:55] duration metric: took 4.580927ms for default service account to be created ...
	I1126 20:49:09.418798  208605 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:49:09.424791  208605 system_pods.go:86] 8 kube-system pods found
	I1126 20:49:09.424821  208605 system_pods.go:89] "coredns-66bc5c9577-4z56c" [adf50d03-764a-47f2-8b7b-85682915bd69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:49:09.424828  208605 system_pods.go:89] "etcd-no-preload-956694" [30a458d9-4cc3-4efc-ac03-1c12fd3467b2] Running
	I1126 20:49:09.424834  208605 system_pods.go:89] "kindnet-dfdbx" [68b183f2-571b-476a-924c-7b0a22cfe302] Running
	I1126 20:49:09.424838  208605 system_pods.go:89] "kube-apiserver-no-preload-956694" [19dfb0a5-0634-42eb-b9a2-44bf5665b3ec] Running
	I1126 20:49:09.424842  208605 system_pods.go:89] "kube-controller-manager-no-preload-956694" [56618fe0-6b76-493c-986e-3acf20cc0c46] Running
	I1126 20:49:09.424846  208605 system_pods.go:89] "kube-proxy-2j4dg" [c799d69f-b86f-4ef0-82b2-0b4200f9164f] Running
	I1126 20:49:09.424850  208605 system_pods.go:89] "kube-scheduler-no-preload-956694" [07469dd8-7c87-4bea-8dda-a24815aa6db1] Running
	I1126 20:49:09.424853  208605 system_pods.go:89] "storage-provisioner" [c37b32d0-5da0-4557-91cf-d1d082be9471] Running
	I1126 20:49:09.424860  208605 system_pods.go:126] duration metric: took 6.056508ms to wait for k8s-apps to be running ...
	I1126 20:49:09.424867  208605 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:49:09.424920  208605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:49:09.438878  208605 system_svc.go:56] duration metric: took 14.001678ms WaitForService to wait for kubelet
	I1126 20:49:09.438904  208605 kubeadm.go:587] duration metric: took 14.50849186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:49:09.438921  208605 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:49:09.442145  208605 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:49:09.442173  208605 node_conditions.go:123] node cpu capacity is 2
	I1126 20:49:09.442185  208605 node_conditions.go:105] duration metric: took 3.25932ms to run NodePressure ...
	I1126 20:49:09.442198  208605 start.go:242] waiting for startup goroutines ...
	I1126 20:49:09.442205  208605 start.go:247] waiting for cluster config update ...
	I1126 20:49:09.442216  208605 start.go:256] writing updated cluster config ...
	I1126 20:49:09.442497  208605 ssh_runner.go:195] Run: rm -f paused
	I1126 20:49:09.446615  208605 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:49:09.450247  208605 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4z56c" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:10.457116  208605 pod_ready.go:94] pod "coredns-66bc5c9577-4z56c" is "Ready"
	I1126 20:49:10.457152  208605 pod_ready.go:86] duration metric: took 1.006841739s for pod "coredns-66bc5c9577-4z56c" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:10.461386  208605 pod_ready.go:83] waiting for pod "etcd-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:10.466972  208605 pod_ready.go:94] pod "etcd-no-preload-956694" is "Ready"
	I1126 20:49:10.467047  208605 pod_ready.go:86] duration metric: took 5.584761ms for pod "etcd-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:10.469295  208605 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:10.473719  208605 pod_ready.go:94] pod "kube-apiserver-no-preload-956694" is "Ready"
	I1126 20:49:10.473745  208605 pod_ready.go:86] duration metric: took 4.372901ms for pod "kube-apiserver-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:10.475924  208605 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:10.654545  208605 pod_ready.go:94] pod "kube-controller-manager-no-preload-956694" is "Ready"
	I1126 20:49:10.654575  208605 pod_ready.go:86] duration metric: took 178.627525ms for pod "kube-controller-manager-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:10.858314  208605 pod_ready.go:83] waiting for pod "kube-proxy-2j4dg" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:11.254352  208605 pod_ready.go:94] pod "kube-proxy-2j4dg" is "Ready"
	I1126 20:49:11.254380  208605 pod_ready.go:86] duration metric: took 396.037379ms for pod "kube-proxy-2j4dg" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:11.455130  208605 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:11.856693  208605 pod_ready.go:94] pod "kube-scheduler-no-preload-956694" is "Ready"
	I1126 20:49:11.856728  208605 pod_ready.go:86] duration metric: took 401.571017ms for pod "kube-scheduler-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:11.856742  208605 pod_ready.go:40] duration metric: took 2.410050824s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
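The `pod_ready` wait above watches every kube-system pod carrying one of several `key=value` labels (`k8s-app=kube-dns`, `component=etcd`, and so on). A sketch of that selector match, assuming pod labels are available as a plain dict (hypothetical helper, not minikube's actual code):

```python
def matches_any(pod_labels, selectors):
    """Return True if pod_labels satisfies at least one 'key=value'
    selector from the list."""
    return any(pod_labels.get(k) == v
               for k, v in (s.split("=", 1) for s in selectors))

selectors = ["k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"]
print(matches_any({"k8s-app": "kube-dns"}, selectors))  # True
print(matches_any({"app": "busybox"}, selectors))       # False
```

Pods matching none of the selectors (such as the `default/busybox` pod started in the CRI-O log below) are simply outside this readiness gate.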
	I1126 20:49:11.920986  208605 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:49:11.924351  208605 out.go:179] * Done! kubectl is now configured to use "no-preload-956694" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 20:49:09 no-preload-956694 crio[838]: time="2025-11-26T20:49:09.325510144Z" level=info msg="Created container 5804e2ef6d9ce974c72219abcf71e5ac7f1d71f9f0f5d01222d0f25ab56e80b7: kube-system/coredns-66bc5c9577-4z56c/coredns" id=bc90086c-5234-42aa-9caa-e06b4fff2b3e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:49:09 no-preload-956694 crio[838]: time="2025-11-26T20:49:09.326414649Z" level=info msg="Starting container: 5804e2ef6d9ce974c72219abcf71e5ac7f1d71f9f0f5d01222d0f25ab56e80b7" id=995907dd-367e-4e8d-8a46-843af220ab7f name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:49:09 no-preload-956694 crio[838]: time="2025-11-26T20:49:09.3284462Z" level=info msg="Started container" PID=2481 containerID=5804e2ef6d9ce974c72219abcf71e5ac7f1d71f9f0f5d01222d0f25ab56e80b7 description=kube-system/coredns-66bc5c9577-4z56c/coredns id=995907dd-367e-4e8d-8a46-843af220ab7f name=/runtime.v1.RuntimeService/StartContainer sandboxID=521c2b018f5fb3abe5570c4675fc385c6778b7013b68190183427b011ac61b70
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.448178096Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a5afc6b5-d4ff-41f4-8251-09ef00fdde59 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.448703117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.453823662Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:25d44104f5abc8c7afe7f3e9a06544ff165efc28da621992002b0344c7523c80 UID:b82900f4-b9ca-4f50-9ac6-95bb86374236 NetNS:/var/run/netns/431a2ac8-4f58-4914-bd80-6f208c37e38d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400274eca0}] Aliases:map[]}"
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.453863678Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.463583053Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:25d44104f5abc8c7afe7f3e9a06544ff165efc28da621992002b0344c7523c80 UID:b82900f4-b9ca-4f50-9ac6-95bb86374236 NetNS:/var/run/netns/431a2ac8-4f58-4914-bd80-6f208c37e38d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400274eca0}] Aliases:map[]}"
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.463722667Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.467323022Z" level=info msg="Ran pod sandbox 25d44104f5abc8c7afe7f3e9a06544ff165efc28da621992002b0344c7523c80 with infra container: default/busybox/POD" id=a5afc6b5-d4ff-41f4-8251-09ef00fdde59 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.468493636Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fd26a47b-b371-41f9-a905-b3854237272d name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.468614083Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=fd26a47b-b371-41f9-a905-b3854237272d name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.468666299Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=fd26a47b-b371-41f9-a905-b3854237272d name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.46954166Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=59ae3bfa-4b03-47f0-a0c4-a581bd4f1db8 name=/runtime.v1.ImageService/PullImage
	Nov 26 20:49:12 no-preload-956694 crio[838]: time="2025-11-26T20:49:12.471254468Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 26 20:49:14 no-preload-956694 crio[838]: time="2025-11-26T20:49:14.294786482Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=59ae3bfa-4b03-47f0-a0c4-a581bd4f1db8 name=/runtime.v1.ImageService/PullImage
	Nov 26 20:49:14 no-preload-956694 crio[838]: time="2025-11-26T20:49:14.295376511Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9c487ed8-b204-41c7-94e2-cd2faaabcafe name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:49:14 no-preload-956694 crio[838]: time="2025-11-26T20:49:14.296989244Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6e47a5f2-9eb0-48c1-a207-0327672031c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:49:14 no-preload-956694 crio[838]: time="2025-11-26T20:49:14.302589282Z" level=info msg="Creating container: default/busybox/busybox" id=56787b4f-49fa-4d56-83d2-16d49f51e8c3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:49:14 no-preload-956694 crio[838]: time="2025-11-26T20:49:14.302702263Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:49:14 no-preload-956694 crio[838]: time="2025-11-26T20:49:14.307584761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:49:14 no-preload-956694 crio[838]: time="2025-11-26T20:49:14.308458588Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:49:14 no-preload-956694 crio[838]: time="2025-11-26T20:49:14.323780053Z" level=info msg="Created container 8add1679f297d41cb16bf0e56d898c982fed707fe97b52d4a4775d55695f403d: default/busybox/busybox" id=56787b4f-49fa-4d56-83d2-16d49f51e8c3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:49:14 no-preload-956694 crio[838]: time="2025-11-26T20:49:14.324772736Z" level=info msg="Starting container: 8add1679f297d41cb16bf0e56d898c982fed707fe97b52d4a4775d55695f403d" id=b0d18b69-2557-4c1b-91a1-865c19d48d8a name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:49:14 no-preload-956694 crio[838]: time="2025-11-26T20:49:14.326779483Z" level=info msg="Started container" PID=2539 containerID=8add1679f297d41cb16bf0e56d898c982fed707fe97b52d4a4775d55695f403d description=default/busybox/busybox id=b0d18b69-2557-4c1b-91a1-865c19d48d8a name=/runtime.v1.RuntimeService/StartContainer sandboxID=25d44104f5abc8c7afe7f3e9a06544ff165efc28da621992002b0344c7523c80
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8add1679f297d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   25d44104f5abc       busybox                                     default
	5804e2ef6d9ce       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   521c2b018f5fb       coredns-66bc5c9577-4z56c                    kube-system
	4fda145a24442       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   1c2995d94ddcf       storage-provisioner                         kube-system
	258dd896f1411       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   90990f3574548       kindnet-dfdbx                               kube-system
	fedf0b2add3aa       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      26 seconds ago      Running             kube-proxy                0                   6912232c3a3cb       kube-proxy-2j4dg                            kube-system
	89df02dc4c312       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      37 seconds ago      Running             kube-scheduler            0                   806d3e5344c74       kube-scheduler-no-preload-956694            kube-system
	36a57b50518df       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      37 seconds ago      Running             kube-apiserver            0                   536d32c43107c       kube-apiserver-no-preload-956694            kube-system
	aaa97d1ae095e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      37 seconds ago      Running             kube-controller-manager   0                   69f3f51a1db13       kube-controller-manager-no-preload-956694   kube-system
	c6c7074e7e508       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      37 seconds ago      Running             etcd                      0                   b727c5a3d48ad       etcd-no-preload-956694                      kube-system
	
	
	==> coredns [5804e2ef6d9ce974c72219abcf71e5ac7f1d71f9f0f5d01222d0f25ab56e80b7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43722 - 62226 "HINFO IN 86822768627554314.7264717668899755262. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.011699777s
	
	
	==> describe nodes <==
	Name:               no-preload-956694
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-956694
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=no-preload-956694
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_48_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:48:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-956694
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:49:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:49:20 +0000   Wed, 26 Nov 2025 20:48:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:49:20 +0000   Wed, 26 Nov 2025 20:48:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:49:20 +0000   Wed, 26 Nov 2025 20:48:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:49:20 +0000   Wed, 26 Nov 2025 20:49:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-956694
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                ca0edc11-ec05-4f09-ac60-84d8767e18da
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-4z56c                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     26s
	  kube-system                 etcd-no-preload-956694                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-dfdbx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-956694             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-956694    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-2j4dg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-956694             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node no-preload-956694 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node no-preload-956694 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node no-preload-956694 status is now: NodeHasSufficientPID
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node no-preload-956694 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node no-preload-956694 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     31s                kubelet          Node no-preload-956694 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           27s                node-controller  Node no-preload-956694 event: Registered Node no-preload-956694 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-956694 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov26 20:21] overlayfs: idmapped layers are currently not supported
	[ +33.563196] overlayfs: idmapped layers are currently not supported
	[Nov26 20:23] overlayfs: idmapped layers are currently not supported
	[Nov26 20:24] overlayfs: idmapped layers are currently not supported
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c6c7074e7e508b0b142302ce836211ef8dde033bd14488eb0cf8cb873fe7f6b0] <==
	{"level":"warn","ts":"2025-11-26T20:48:45.965845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:45.981188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.006026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.025629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.039094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.060392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.071301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.117672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.151141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.176047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.211822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.222428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.252105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.252735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.268285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.285565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.306679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.324151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.337062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.358782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.368583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.402748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.418005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.435173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:48:46.521694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58140","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:49:21 up  1:31,  0 user,  load average: 1.56, 2.58, 2.27
	Linux no-preload-956694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [258dd896f1411b24f77d536b442bb2d6f3cea10fbe8bf1d41aa326b105364fe7] <==
	I1126 20:48:58.232839       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:48:58.326480       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:48:58.326682       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:48:58.326722       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:48:58.326765       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:48:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:48:58.529090       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:48:58.529120       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:48:58.529130       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:48:58.529226       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1126 20:48:58.725996       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:48:58.726032       1 metrics.go:72] Registering metrics
	I1126 20:48:58.726115       1 controller.go:711] "Syncing nftables rules"
	I1126 20:49:08.533129       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:49:08.533202       1 main.go:301] handling current node
	I1126 20:49:18.529555       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:49:18.529591       1 main.go:301] handling current node
	
	
	==> kube-apiserver [36a57b50518dfad2f9d63b0201ce5b9e83af0f14328ae803ff1d17c535a0ae97] <==
	I1126 20:48:47.387582       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:48:47.388391       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:48:47.393857       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1126 20:48:47.432047       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:48:47.456556       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1126 20:48:47.464528       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:48:47.489879       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:48:48.180531       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:48:48.190707       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:48:48.190732       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:48:49.024225       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:48:49.075240       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:48:49.193591       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:48:49.202328       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1126 20:48:49.203497       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:48:49.208710       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:48:49.337639       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:48:50.202433       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:48:50.242702       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:48:50.258799       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:48:55.061552       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1126 20:48:55.146767       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:48:55.318113       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:48:55.327938       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1126 20:49:20.270019       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:37362: use of closed network connection
	
	
	==> kube-controller-manager [aaa97d1ae095e49dc7796c272355cd79be20d8b085d0c5eb726a6137c085d22b] <==
	I1126 20:48:54.346683       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:48:54.346696       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:48:54.349640       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:48:54.352906       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:48:54.357266       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:48:54.357332       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:48:54.357339       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:48:54.357350       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:48:54.374045       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 20:48:54.374389       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 20:48:54.374447       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:48:54.374480       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:48:54.380869       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:48:54.397302       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:48:54.397342       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:48:54.397420       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 20:48:54.397452       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:48:54.397488       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 20:48:54.398199       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1126 20:48:54.398245       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:48:54.398275       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:48:54.398731       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 20:48:54.398827       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1126 20:48:54.408348       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-956694" podCIDRs=["10.244.0.0/24"]
	I1126 20:49:09.337123       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [fedf0b2add3aa89bad5e1fb02d2378150b2733974c4c6c9a55a6b671fb8f41ed] <==
	I1126 20:48:55.688019       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:48:55.823578       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:48:55.927881       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:48:55.932724       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1126 20:48:55.974993       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:48:56.155574       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:48:56.155636       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:48:56.172281       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:48:56.172575       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:48:56.172587       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:48:56.174009       1 config.go:200] "Starting service config controller"
	I1126 20:48:56.174030       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:48:56.176148       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:48:56.176162       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:48:56.176178       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:48:56.176182       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:48:56.180569       1 config.go:309] "Starting node config controller"
	I1126 20:48:56.180601       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:48:56.180610       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:48:56.274407       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:48:56.276639       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:48:56.276671       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [89df02dc4c31201a7cbc5a923b3ff57db141ffd03baad651e8b5583c0743648e] <==
	I1126 20:48:47.192932       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:48:48.825906       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:48:48.826046       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:48:48.826084       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:48:48.826112       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:48:48.859649       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:48:48.859770       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:48:48.862291       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:48:48.862373       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:48:48.862417       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:48:48.862469       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1126 20:48:48.878973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1126 20:48:50.166219       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:48:54 no-preload-956694 kubelet[2003]: I1126 20:48:54.442727    2003 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 26 20:48:54 no-preload-956694 kubelet[2003]: I1126 20:48:54.443473    2003 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 26 20:48:55 no-preload-956694 kubelet[2003]: I1126 20:48:55.148566    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ltmf\" (UniqueName: \"kubernetes.io/projected/c799d69f-b86f-4ef0-82b2-0b4200f9164f-kube-api-access-6ltmf\") pod \"kube-proxy-2j4dg\" (UID: \"c799d69f-b86f-4ef0-82b2-0b4200f9164f\") " pod="kube-system/kube-proxy-2j4dg"
	Nov 26 20:48:55 no-preload-956694 kubelet[2003]: I1126 20:48:55.148611    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c799d69f-b86f-4ef0-82b2-0b4200f9164f-lib-modules\") pod \"kube-proxy-2j4dg\" (UID: \"c799d69f-b86f-4ef0-82b2-0b4200f9164f\") " pod="kube-system/kube-proxy-2j4dg"
	Nov 26 20:48:55 no-preload-956694 kubelet[2003]: I1126 20:48:55.148632    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c799d69f-b86f-4ef0-82b2-0b4200f9164f-kube-proxy\") pod \"kube-proxy-2j4dg\" (UID: \"c799d69f-b86f-4ef0-82b2-0b4200f9164f\") " pod="kube-system/kube-proxy-2j4dg"
	Nov 26 20:48:55 no-preload-956694 kubelet[2003]: I1126 20:48:55.148657    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c799d69f-b86f-4ef0-82b2-0b4200f9164f-xtables-lock\") pod \"kube-proxy-2j4dg\" (UID: \"c799d69f-b86f-4ef0-82b2-0b4200f9164f\") " pod="kube-system/kube-proxy-2j4dg"
	Nov 26 20:48:55 no-preload-956694 kubelet[2003]: I1126 20:48:55.249304    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68b183f2-571b-476a-924c-7b0a22cfe302-xtables-lock\") pod \"kindnet-dfdbx\" (UID: \"68b183f2-571b-476a-924c-7b0a22cfe302\") " pod="kube-system/kindnet-dfdbx"
	Nov 26 20:48:55 no-preload-956694 kubelet[2003]: I1126 20:48:55.249351    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9rsw\" (UniqueName: \"kubernetes.io/projected/68b183f2-571b-476a-924c-7b0a22cfe302-kube-api-access-h9rsw\") pod \"kindnet-dfdbx\" (UID: \"68b183f2-571b-476a-924c-7b0a22cfe302\") " pod="kube-system/kindnet-dfdbx"
	Nov 26 20:48:55 no-preload-956694 kubelet[2003]: I1126 20:48:55.249384    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68b183f2-571b-476a-924c-7b0a22cfe302-lib-modules\") pod \"kindnet-dfdbx\" (UID: \"68b183f2-571b-476a-924c-7b0a22cfe302\") " pod="kube-system/kindnet-dfdbx"
	Nov 26 20:48:55 no-preload-956694 kubelet[2003]: I1126 20:48:55.249440    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/68b183f2-571b-476a-924c-7b0a22cfe302-cni-cfg\") pod \"kindnet-dfdbx\" (UID: \"68b183f2-571b-476a-924c-7b0a22cfe302\") " pod="kube-system/kindnet-dfdbx"
	Nov 26 20:48:55 no-preload-956694 kubelet[2003]: I1126 20:48:55.312145    2003 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 26 20:48:55 no-preload-956694 kubelet[2003]: W1126 20:48:55.434778    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/crio-6912232c3a3cb9d2578d75bcc636ac9091ef2443f5253a499dc46aa5cb5a2241 WatchSource:0}: Error finding container 6912232c3a3cb9d2578d75bcc636ac9091ef2443f5253a499dc46aa5cb5a2241: Status 404 returned error can't find the container with id 6912232c3a3cb9d2578d75bcc636ac9091ef2443f5253a499dc46aa5cb5a2241
	Nov 26 20:48:55 no-preload-956694 kubelet[2003]: W1126 20:48:55.503722    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/crio-90990f3574548a7c8ee781f9a462a06b90569ced9226e34170ff5b717af92b60 WatchSource:0}: Error finding container 90990f3574548a7c8ee781f9a462a06b90569ced9226e34170ff5b717af92b60: Status 404 returned error can't find the container with id 90990f3574548a7c8ee781f9a462a06b90569ced9226e34170ff5b717af92b60
	Nov 26 20:48:56 no-preload-956694 kubelet[2003]: I1126 20:48:56.342419    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2j4dg" podStartSLOduration=1.342393295 podStartE2EDuration="1.342393295s" podCreationTimestamp="2025-11-26 20:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:48:56.342209268 +0000 UTC m=+6.296675991" watchObservedRunningTime="2025-11-26 20:48:56.342393295 +0000 UTC m=+6.296860026"
	Nov 26 20:48:59 no-preload-956694 kubelet[2003]: I1126 20:48:59.464660    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dfdbx" podStartSLOduration=1.8543008429999999 podStartE2EDuration="4.464641505s" podCreationTimestamp="2025-11-26 20:48:55 +0000 UTC" firstStartedPulling="2025-11-26 20:48:55.507866764 +0000 UTC m=+5.462333487" lastFinishedPulling="2025-11-26 20:48:58.118207434 +0000 UTC m=+8.072674149" observedRunningTime="2025-11-26 20:48:58.340595593 +0000 UTC m=+8.295062316" watchObservedRunningTime="2025-11-26 20:48:59.464641505 +0000 UTC m=+9.419108228"
	Nov 26 20:49:08 no-preload-956694 kubelet[2003]: I1126 20:49:08.895671    2003 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 26 20:49:08 no-preload-956694 kubelet[2003]: I1126 20:49:08.953401    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adf50d03-764a-47f2-8b7b-85682915bd69-config-volume\") pod \"coredns-66bc5c9577-4z56c\" (UID: \"adf50d03-764a-47f2-8b7b-85682915bd69\") " pod="kube-system/coredns-66bc5c9577-4z56c"
	Nov 26 20:49:08 no-preload-956694 kubelet[2003]: I1126 20:49:08.953666    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c37b32d0-5da0-4557-91cf-d1d082be9471-tmp\") pod \"storage-provisioner\" (UID: \"c37b32d0-5da0-4557-91cf-d1d082be9471\") " pod="kube-system/storage-provisioner"
	Nov 26 20:49:08 no-preload-956694 kubelet[2003]: I1126 20:49:08.953800    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nnsd\" (UniqueName: \"kubernetes.io/projected/c37b32d0-5da0-4557-91cf-d1d082be9471-kube-api-access-9nnsd\") pod \"storage-provisioner\" (UID: \"c37b32d0-5da0-4557-91cf-d1d082be9471\") " pod="kube-system/storage-provisioner"
	Nov 26 20:49:08 no-preload-956694 kubelet[2003]: I1126 20:49:08.953985    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smw4g\" (UniqueName: \"kubernetes.io/projected/adf50d03-764a-47f2-8b7b-85682915bd69-kube-api-access-smw4g\") pod \"coredns-66bc5c9577-4z56c\" (UID: \"adf50d03-764a-47f2-8b7b-85682915bd69\") " pod="kube-system/coredns-66bc5c9577-4z56c"
	Nov 26 20:49:09 no-preload-956694 kubelet[2003]: W1126 20:49:09.281060    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/crio-521c2b018f5fb3abe5570c4675fc385c6778b7013b68190183427b011ac61b70 WatchSource:0}: Error finding container 521c2b018f5fb3abe5570c4675fc385c6778b7013b68190183427b011ac61b70: Status 404 returned error can't find the container with id 521c2b018f5fb3abe5570c4675fc385c6778b7013b68190183427b011ac61b70
	Nov 26 20:49:09 no-preload-956694 kubelet[2003]: I1126 20:49:09.423903    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4z56c" podStartSLOduration=14.423885439 podStartE2EDuration="14.423885439s" podCreationTimestamp="2025-11-26 20:48:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:49:09.383482464 +0000 UTC m=+19.337949187" watchObservedRunningTime="2025-11-26 20:49:09.423885439 +0000 UTC m=+19.378352153"
	Nov 26 20:49:10 no-preload-956694 kubelet[2003]: I1126 20:49:10.385282    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.385263658 podStartE2EDuration="14.385263658s" podCreationTimestamp="2025-11-26 20:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:49:09.426738215 +0000 UTC m=+19.381204946" watchObservedRunningTime="2025-11-26 20:49:10.385263658 +0000 UTC m=+20.339730390"
	Nov 26 20:49:12 no-preload-956694 kubelet[2003]: I1126 20:49:12.180573    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbzl9\" (UniqueName: \"kubernetes.io/projected/b82900f4-b9ca-4f50-9ac6-95bb86374236-kube-api-access-jbzl9\") pod \"busybox\" (UID: \"b82900f4-b9ca-4f50-9ac6-95bb86374236\") " pod="default/busybox"
	Nov 26 20:49:12 no-preload-956694 kubelet[2003]: W1126 20:49:12.465588    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/crio-25d44104f5abc8c7afe7f3e9a06544ff165efc28da621992002b0344c7523c80 WatchSource:0}: Error finding container 25d44104f5abc8c7afe7f3e9a06544ff165efc28da621992002b0344c7523c80: Status 404 returned error can't find the container with id 25d44104f5abc8c7afe7f3e9a06544ff165efc28da621992002b0344c7523c80
	
	
	==> storage-provisioner [4fda145a24442c30109e083456922019381d8416793002fa8f0610a6d24bf3be] <==
	I1126 20:49:09.306106       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:49:09.323253       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:49:09.323368       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:49:09.331868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:09.338511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:49:09.338860       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:49:09.341545       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-956694_53e59d5a-19f6-47db-b109-0abf7db07765!
	I1126 20:49:09.372655       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa3c6d99-6069-4dc6-b561-d2344160065e", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-956694_53e59d5a-19f6-47db-b109-0abf7db07765 became leader
	W1126 20:49:09.386103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:09.417980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:49:09.446247       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-956694_53e59d5a-19f6-47db-b109-0abf7db07765!
	W1126 20:49:11.421475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:11.425719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:13.428476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:13.432711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:15.435695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:15.439795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:17.443390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:17.448001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:19.451621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:19.456148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:21.459694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:49:21.467614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-956694 -n no-preload-956694
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-956694 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.30s)

TestStartStop/group/no-preload/serial/Pause (6.23s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-956694 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-956694 --alsologtostderr -v=1: exit status 80 (1.841252112s)

-- stdout --
	* Pausing node no-preload-956694 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 20:50:45.668238  218087 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:50:45.668383  218087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:50:45.668394  218087 out.go:374] Setting ErrFile to fd 2...
	I1126 20:50:45.668402  218087 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:50:45.668684  218087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:50:45.668971  218087 out.go:368] Setting JSON to false
	I1126 20:50:45.668993  218087 mustload.go:66] Loading cluster: no-preload-956694
	I1126 20:50:45.669434  218087 config.go:182] Loaded profile config "no-preload-956694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:50:45.670100  218087 cli_runner.go:164] Run: docker container inspect no-preload-956694 --format={{.State.Status}}
	I1126 20:50:45.688133  218087 host.go:66] Checking if "no-preload-956694" exists ...
	I1126 20:50:45.688453  218087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:50:45.761806  218087 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:50:45.75185115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:50:45.762586  218087 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-956694 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:50:45.766105  218087 out.go:179] * Pausing node no-preload-956694 ... 
	I1126 20:50:45.768988  218087 host.go:66] Checking if "no-preload-956694" exists ...
	I1126 20:50:45.769333  218087 ssh_runner.go:195] Run: systemctl --version
	I1126 20:50:45.769383  218087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-956694
	I1126 20:50:45.786945  218087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/no-preload-956694/id_rsa Username:docker}
	I1126 20:50:45.897106  218087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:50:45.910667  218087 pause.go:52] kubelet running: true
	I1126 20:50:45.910749  218087 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:50:46.173364  218087 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:50:46.173458  218087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:50:46.250692  218087 cri.go:89] found id: "6a0c09bf8b235fc0d759a84c8b8fdceafe61508be112ec1cf5b51a0d6b389fa7"
	I1126 20:50:46.250719  218087 cri.go:89] found id: "0554e6955b891b84949248d4dd7484a05d62ffe5fb5cc50417b0300d8db3c64e"
	I1126 20:50:46.250724  218087 cri.go:89] found id: "9367fa09811bc7824710f65db213810a28f4a5b2e9e228aec215eff41118f2d9"
	I1126 20:50:46.250727  218087 cri.go:89] found id: "39dbe8551a73859abffe10915c3f3e6c1fd1869e9b974e6953b486b1a5d2578d"
	I1126 20:50:46.250730  218087 cri.go:89] found id: "fe095a7725bd274ab36ace78665c689e31b9870d45c3f58f42466f2b19ca1bac"
	I1126 20:50:46.250734  218087 cri.go:89] found id: "64bf641df6328d766e26a8b3d40eb3a629a1b6d5034073ad5e5eacc3049b071b"
	I1126 20:50:46.250737  218087 cri.go:89] found id: "69bdaac7802d27e42ed29500b6c8549fd05c61287e8c9653748bb2accdeae2e1"
	I1126 20:50:46.250739  218087 cri.go:89] found id: "166f0bf71ff637391d7021779d2e2a5d27dea53b2e94af5da7c6556cf939eefc"
	I1126 20:50:46.250743  218087 cri.go:89] found id: "732f8dd674b2a79542d6b5db5ae656af930d6da79a225d1e0dbcfdec933c1b97"
	I1126 20:50:46.250749  218087 cri.go:89] found id: "29aceaa82429db92b12b0fa7cd1c23589c67124c5ba0a8f019d64c3035e55cf4"
	I1126 20:50:46.250752  218087 cri.go:89] found id: "ac76226123cfd439ff139dd22351505e7daee346df3e1e19dcb7f7d973283462"
	I1126 20:50:46.250755  218087 cri.go:89] found id: ""
	I1126 20:50:46.250805  218087 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:50:46.271202  218087 retry.go:31] will retry after 368.450637ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:50:46Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:50:46.640908  218087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:50:46.655175  218087 pause.go:52] kubelet running: false
	I1126 20:50:46.655242  218087 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:50:46.819451  218087 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:50:46.819569  218087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:50:46.890577  218087 cri.go:89] found id: "6a0c09bf8b235fc0d759a84c8b8fdceafe61508be112ec1cf5b51a0d6b389fa7"
	I1126 20:50:46.890608  218087 cri.go:89] found id: "0554e6955b891b84949248d4dd7484a05d62ffe5fb5cc50417b0300d8db3c64e"
	I1126 20:50:46.890614  218087 cri.go:89] found id: "9367fa09811bc7824710f65db213810a28f4a5b2e9e228aec215eff41118f2d9"
	I1126 20:50:46.890618  218087 cri.go:89] found id: "39dbe8551a73859abffe10915c3f3e6c1fd1869e9b974e6953b486b1a5d2578d"
	I1126 20:50:46.890621  218087 cri.go:89] found id: "fe095a7725bd274ab36ace78665c689e31b9870d45c3f58f42466f2b19ca1bac"
	I1126 20:50:46.890624  218087 cri.go:89] found id: "64bf641df6328d766e26a8b3d40eb3a629a1b6d5034073ad5e5eacc3049b071b"
	I1126 20:50:46.890654  218087 cri.go:89] found id: "69bdaac7802d27e42ed29500b6c8549fd05c61287e8c9653748bb2accdeae2e1"
	I1126 20:50:46.890660  218087 cri.go:89] found id: "166f0bf71ff637391d7021779d2e2a5d27dea53b2e94af5da7c6556cf939eefc"
	I1126 20:50:46.890663  218087 cri.go:89] found id: "732f8dd674b2a79542d6b5db5ae656af930d6da79a225d1e0dbcfdec933c1b97"
	I1126 20:50:46.890669  218087 cri.go:89] found id: "29aceaa82429db92b12b0fa7cd1c23589c67124c5ba0a8f019d64c3035e55cf4"
	I1126 20:50:46.890676  218087 cri.go:89] found id: "ac76226123cfd439ff139dd22351505e7daee346df3e1e19dcb7f7d973283462"
	I1126 20:50:46.890680  218087 cri.go:89] found id: ""
	I1126 20:50:46.890739  218087 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:50:46.901592  218087 retry.go:31] will retry after 236.179368ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:50:46Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:50:47.138044  218087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:50:47.158434  218087 pause.go:52] kubelet running: false
	I1126 20:50:47.158538  218087 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:50:47.340562  218087 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:50:47.340666  218087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:50:47.413586  218087 cri.go:89] found id: "6a0c09bf8b235fc0d759a84c8b8fdceafe61508be112ec1cf5b51a0d6b389fa7"
	I1126 20:50:47.413609  218087 cri.go:89] found id: "0554e6955b891b84949248d4dd7484a05d62ffe5fb5cc50417b0300d8db3c64e"
	I1126 20:50:47.413615  218087 cri.go:89] found id: "9367fa09811bc7824710f65db213810a28f4a5b2e9e228aec215eff41118f2d9"
	I1126 20:50:47.413619  218087 cri.go:89] found id: "39dbe8551a73859abffe10915c3f3e6c1fd1869e9b974e6953b486b1a5d2578d"
	I1126 20:50:47.413623  218087 cri.go:89] found id: "fe095a7725bd274ab36ace78665c689e31b9870d45c3f58f42466f2b19ca1bac"
	I1126 20:50:47.413626  218087 cri.go:89] found id: "64bf641df6328d766e26a8b3d40eb3a629a1b6d5034073ad5e5eacc3049b071b"
	I1126 20:50:47.413634  218087 cri.go:89] found id: "69bdaac7802d27e42ed29500b6c8549fd05c61287e8c9653748bb2accdeae2e1"
	I1126 20:50:47.413637  218087 cri.go:89] found id: "166f0bf71ff637391d7021779d2e2a5d27dea53b2e94af5da7c6556cf939eefc"
	I1126 20:50:47.413640  218087 cri.go:89] found id: "732f8dd674b2a79542d6b5db5ae656af930d6da79a225d1e0dbcfdec933c1b97"
	I1126 20:50:47.413646  218087 cri.go:89] found id: "29aceaa82429db92b12b0fa7cd1c23589c67124c5ba0a8f019d64c3035e55cf4"
	I1126 20:50:47.413649  218087 cri.go:89] found id: "ac76226123cfd439ff139dd22351505e7daee346df3e1e19dcb7f7d973283462"
	I1126 20:50:47.413652  218087 cri.go:89] found id: ""
	I1126 20:50:47.413700  218087 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:50:47.429499  218087 out.go:203] 
	W1126 20:50:47.432479  218087 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:50:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:50:47.432504  218087 out.go:285] * 
	W1126 20:50:47.438552  218087 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:50:47.441455  218087 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-956694 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-956694
helpers_test.go:243: (dbg) docker inspect no-preload-956694:

-- stdout --
	[
	    {
	        "Id": "53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b",
	        "Created": "2025-11-26T20:48:11.257955221Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213100,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:49:35.316478343Z",
	            "FinishedAt": "2025-11-26T20:49:34.266543458Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/hostname",
	        "HostsPath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/hosts",
	        "LogPath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b-json.log",
	        "Name": "/no-preload-956694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-956694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-956694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b",
	                "LowerDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-956694",
	                "Source": "/var/lib/docker/volumes/no-preload-956694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-956694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-956694",
	                "name.minikube.sigs.k8s.io": "no-preload-956694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d686f6f27aef486c06404574de0c4ae344714a7029d94221211ab1f31ad7896",
	            "SandboxKey": "/var/run/docker/netns/1d686f6f27ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-956694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:8b:4b:9d:e5:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "32516947827eacd2aa341e65200cd5dd0564df7db92f9b17b625c9371ac2deac",
	                    "EndpointID": "a0b2078c7e066324d5ef49c0f64bd6081628035ebd9b5da162ec361fc6bf51be",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-956694",
	                        "53e8b694caf6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-956694 -n no-preload-956694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-956694 -n no-preload-956694: exit status 2 (338.49168ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-956694 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-956694 logs -n 25: (1.314391187s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-164741   │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ delete  │ -p force-systemd-env-274518                                                                                                                                                                                                                   │ force-systemd-env-274518 │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p cert-options-207115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-207115      │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ cert-options-207115 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-207115      │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ -p cert-options-207115 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-207115      │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ delete  │ -p cert-options-207115                                                                                                                                                                                                                        │ cert-options-207115      │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-264537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │                     │
	│ stop    │ -p old-k8s-version-264537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-264537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:47 UTC │
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164741   │ jenkins │ v1.37.0 │ 26 Nov 25 20:47 UTC │ 26 Nov 25 20:49 UTC │
	│ image   │ old-k8s-version-264537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ pause   │ -p old-k8s-version-264537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │                     │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                                                                                     │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                                                                                     │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable metrics-server -p no-preload-956694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │                     │
	│ stop    │ -p no-preload-956694 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable dashboard -p no-preload-956694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p cert-expiration-164741                                                                                                                                                                                                                     │ cert-expiration-164741   │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-616586       │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │                     │
	│ image   │ no-preload-956694 image list --format=json                                                                                                                                                                                                    │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ pause   │ -p no-preload-956694 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:49:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:49:45.715424  214963 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:49:45.715639  214963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:49:45.715668  214963 out.go:374] Setting ErrFile to fd 2...
	I1126 20:49:45.715687  214963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:49:45.715970  214963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:49:45.716418  214963 out.go:368] Setting JSON to false
	I1126 20:49:45.717415  214963 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5516,"bootTime":1764184670,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:49:45.717506  214963 start.go:143] virtualization:  
	I1126 20:49:45.721672  214963 out.go:179] * [embed-certs-616586] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:49:45.726345  214963 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:49:45.726407  214963 notify.go:221] Checking for updates...
	I1126 20:49:45.733117  214963 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:49:45.736426  214963 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:49:45.739723  214963 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:49:45.743002  214963 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:49:45.746718  214963 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:49:45.750467  214963 config.go:182] Loaded profile config "no-preload-956694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:49:45.750629  214963 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:49:45.808693  214963 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:49:45.808811  214963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:49:45.924761  214963 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-26 20:49:45.908978023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:49:45.924866  214963 docker.go:319] overlay module found
	I1126 20:49:45.928197  214963 out.go:179] * Using the docker driver based on user configuration
	I1126 20:49:45.931161  214963 start.go:309] selected driver: docker
	I1126 20:49:45.931186  214963 start.go:927] validating driver "docker" against <nil>
	I1126 20:49:45.931200  214963 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:49:45.931955  214963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:49:46.033996  214963 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-26 20:49:46.021019422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:49:46.034185  214963 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 20:49:46.034407  214963 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:49:46.037353  214963 out.go:179] * Using Docker driver with root privileges
	I1126 20:49:46.040251  214963 cni.go:84] Creating CNI manager for ""
	I1126 20:49:46.040326  214963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:49:46.040344  214963 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 20:49:46.040427  214963 start.go:353] cluster config:
	{Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:49:46.043659  214963 out.go:179] * Starting "embed-certs-616586" primary control-plane node in "embed-certs-616586" cluster
	I1126 20:49:46.046648  214963 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:49:46.049643  214963 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:49:46.052481  214963 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:49:46.052537  214963 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:49:46.052552  214963 cache.go:65] Caching tarball of preloaded images
	I1126 20:49:46.052642  214963 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:49:46.052657  214963 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:49:46.052766  214963 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/config.json ...
	I1126 20:49:46.052793  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/config.json: {Name:mkfc2b593589c46372a12f9fd8a847f9f5683e74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:49:46.052962  214963 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:49:46.077125  214963 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:49:46.077144  214963 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:49:46.077160  214963 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:49:46.077190  214963 start.go:360] acquireMachinesLock for embed-certs-616586: {Name:mka5254437f68c39e0c98d2ff47cae58581678c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:49:46.077298  214963 start.go:364] duration metric: took 93.371µs to acquireMachinesLock for "embed-certs-616586"
	I1126 20:49:46.077322  214963 start.go:93] Provisioning new machine with config: &{Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:49:46.077390  214963 start.go:125] createHost starting for "" (driver="docker")
	I1126 20:49:44.931609  212927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:49:44.981265  212927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:49:44.981565  212927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:49:45.038216  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:49:45.038246  212927 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:49:45.190496  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:49:45.190569  212927 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:49:45.326171  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:49:45.326193  212927 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:49:45.485743  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:49:45.485763  212927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:49:45.518419  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:49:45.518439  212927 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:49:45.558454  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:49:45.558480  212927 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:49:45.578627  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:49:45.578647  212927 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:49:45.596966  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:49:45.596987  212927 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:49:45.658160  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:49:45.658182  212927 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:49:45.718387  212927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:49:46.080640  214963 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:49:46.080873  214963 start.go:159] libmachine.API.Create for "embed-certs-616586" (driver="docker")
	I1126 20:49:46.080913  214963 client.go:173] LocalClient.Create starting
	I1126 20:49:46.081015  214963 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem
	I1126 20:49:46.081054  214963 main.go:143] libmachine: Decoding PEM data...
	I1126 20:49:46.081073  214963 main.go:143] libmachine: Parsing certificate...
	I1126 20:49:46.081133  214963 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem
	I1126 20:49:46.081150  214963 main.go:143] libmachine: Decoding PEM data...
	I1126 20:49:46.081162  214963 main.go:143] libmachine: Parsing certificate...
	I1126 20:49:46.081547  214963 cli_runner.go:164] Run: docker network inspect embed-certs-616586 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:49:46.114345  214963 cli_runner.go:211] docker network inspect embed-certs-616586 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:49:46.114424  214963 network_create.go:284] running [docker network inspect embed-certs-616586] to gather additional debugging logs...
	I1126 20:49:46.114441  214963 cli_runner.go:164] Run: docker network inspect embed-certs-616586
	W1126 20:49:46.146105  214963 cli_runner.go:211] docker network inspect embed-certs-616586 returned with exit code 1
	I1126 20:49:46.146133  214963 network_create.go:287] error running [docker network inspect embed-certs-616586]: docker network inspect embed-certs-616586: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-616586 not found
	I1126 20:49:46.146148  214963 network_create.go:289] output of [docker network inspect embed-certs-616586]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-616586 not found
	
	** /stderr **
	I1126 20:49:46.146255  214963 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:49:46.169829  214963 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20cb65a83ad5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:26:47:2b:2e:03} reservation:<nil>}
	I1126 20:49:46.170239  214963 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-16105a7ff776 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:75:f6:9d:ad:ac} reservation:<nil>}
	I1126 20:49:46.170569  214963 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f1c69ea9dfa3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b7:bf:8a:44:80} reservation:<nil>}
	I1126 20:49:46.170805  214963 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-32516947827e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:91:1e:d5:75:89} reservation:<nil>}
	I1126 20:49:46.171194  214963 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a22060}
	I1126 20:49:46.171212  214963 network_create.go:124] attempt to create docker network embed-certs-616586 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1126 20:49:46.171265  214963 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-616586 embed-certs-616586
	I1126 20:49:46.257500  214963 network_create.go:108] docker network embed-certs-616586 192.168.85.0/24 created
	I1126 20:49:46.257529  214963 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-616586" container
	I1126 20:49:46.257604  214963 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:49:46.290159  214963 cli_runner.go:164] Run: docker volume create embed-certs-616586 --label name.minikube.sigs.k8s.io=embed-certs-616586 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:49:46.318051  214963 oci.go:103] Successfully created a docker volume embed-certs-616586
	I1126 20:49:46.318140  214963 cli_runner.go:164] Run: docker run --rm --name embed-certs-616586-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-616586 --entrypoint /usr/bin/test -v embed-certs-616586:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:49:47.028412  214963 oci.go:107] Successfully prepared a docker volume embed-certs-616586
	I1126 20:49:47.028470  214963 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:49:47.028480  214963 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:49:47.028548  214963 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-616586:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:49:54.698522  212927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.766875402s)
	I1126 20:49:54.698585  212927 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.717296264s)
	I1126 20:49:54.698610  212927 node_ready.go:35] waiting up to 6m0s for node "no-preload-956694" to be "Ready" ...
	I1126 20:49:54.698920  212927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.717332307s)
	I1126 20:49:54.720089  212927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.001666006s)
	I1126 20:49:54.723306  212927 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-956694 addons enable metrics-server
	
	I1126 20:49:54.738195  212927 node_ready.go:49] node "no-preload-956694" is "Ready"
	I1126 20:49:54.738225  212927 node_ready.go:38] duration metric: took 39.587714ms for node "no-preload-956694" to be "Ready" ...
	I1126 20:49:54.738238  212927 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:49:54.738292  212927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:49:54.746419  212927 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1126 20:49:54.749152  212927 addons.go:530] duration metric: took 10.284556136s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1126 20:49:54.752730  212927 api_server.go:72] duration metric: took 10.288493985s to wait for apiserver process to appear ...
	I1126 20:49:54.752753  212927 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:49:54.752771  212927 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:49:54.764603  212927 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:49:54.765825  212927 api_server.go:141] control plane version: v1.34.1
	I1126 20:49:54.765852  212927 api_server.go:131] duration metric: took 13.091823ms to wait for apiserver health ...
	I1126 20:49:54.765862  212927 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:49:54.770111  212927 system_pods.go:59] 8 kube-system pods found
	I1126 20:49:54.770152  212927 system_pods.go:61] "coredns-66bc5c9577-4z56c" [adf50d03-764a-47f2-8b7b-85682915bd69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:49:54.770161  212927 system_pods.go:61] "etcd-no-preload-956694" [30a458d9-4cc3-4efc-ac03-1c12fd3467b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:49:54.770170  212927 system_pods.go:61] "kindnet-dfdbx" [68b183f2-571b-476a-924c-7b0a22cfe302] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:49:54.770177  212927 system_pods.go:61] "kube-apiserver-no-preload-956694" [19dfb0a5-0634-42eb-b9a2-44bf5665b3ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:49:54.770185  212927 system_pods.go:61] "kube-controller-manager-no-preload-956694" [56618fe0-6b76-493c-986e-3acf20cc0c46] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:49:54.770191  212927 system_pods.go:61] "kube-proxy-2j4dg" [c799d69f-b86f-4ef0-82b2-0b4200f9164f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:49:54.770200  212927 system_pods.go:61] "kube-scheduler-no-preload-956694" [07469dd8-7c87-4bea-8dda-a24815aa6db1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:49:54.770204  212927 system_pods.go:61] "storage-provisioner" [c37b32d0-5da0-4557-91cf-d1d082be9471] Running
	I1126 20:49:54.770218  212927 system_pods.go:74] duration metric: took 4.349339ms to wait for pod list to return data ...
	I1126 20:49:54.770225  212927 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:49:54.773879  212927 default_sa.go:45] found service account: "default"
	I1126 20:49:54.773907  212927 default_sa.go:55] duration metric: took 3.671189ms for default service account to be created ...
	I1126 20:49:54.773946  212927 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:49:54.777446  212927 system_pods.go:86] 8 kube-system pods found
	I1126 20:49:54.777480  212927 system_pods.go:89] "coredns-66bc5c9577-4z56c" [adf50d03-764a-47f2-8b7b-85682915bd69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:49:54.777489  212927 system_pods.go:89] "etcd-no-preload-956694" [30a458d9-4cc3-4efc-ac03-1c12fd3467b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:49:54.777499  212927 system_pods.go:89] "kindnet-dfdbx" [68b183f2-571b-476a-924c-7b0a22cfe302] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:49:54.777506  212927 system_pods.go:89] "kube-apiserver-no-preload-956694" [19dfb0a5-0634-42eb-b9a2-44bf5665b3ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:49:54.777514  212927 system_pods.go:89] "kube-controller-manager-no-preload-956694" [56618fe0-6b76-493c-986e-3acf20cc0c46] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:49:54.777520  212927 system_pods.go:89] "kube-proxy-2j4dg" [c799d69f-b86f-4ef0-82b2-0b4200f9164f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:49:54.777527  212927 system_pods.go:89] "kube-scheduler-no-preload-956694" [07469dd8-7c87-4bea-8dda-a24815aa6db1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:49:54.777533  212927 system_pods.go:89] "storage-provisioner" [c37b32d0-5da0-4557-91cf-d1d082be9471] Running
	I1126 20:49:54.777540  212927 system_pods.go:126] duration metric: took 3.588468ms to wait for k8s-apps to be running ...
	I1126 20:49:54.777553  212927 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:49:54.777605  212927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:49:54.791301  212927 system_svc.go:56] duration metric: took 13.739812ms WaitForService to wait for kubelet
	I1126 20:49:54.791329  212927 kubeadm.go:587] duration metric: took 10.327095842s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:49:54.791348  212927 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:49:54.795450  212927 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:49:54.795487  212927 node_conditions.go:123] node cpu capacity is 2
	I1126 20:49:54.795507  212927 node_conditions.go:105] duration metric: took 4.153751ms to run NodePressure ...
	I1126 20:49:54.795522  212927 start.go:242] waiting for startup goroutines ...
	I1126 20:49:54.795546  212927 start.go:247] waiting for cluster config update ...
	I1126 20:49:54.795565  212927 start.go:256] writing updated cluster config ...
	I1126 20:49:54.795841  212927 ssh_runner.go:195] Run: rm -f paused
	I1126 20:49:54.803640  212927 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:49:54.807132  212927 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4z56c" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:51.667139  214963 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-616586:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.638556373s)
	I1126 20:49:51.667174  214963 kic.go:203] duration metric: took 4.638690629s to extract preloaded images to volume ...
	W1126 20:49:51.667312  214963 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1126 20:49:51.667432  214963 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:49:51.778630  214963 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-616586 --name embed-certs-616586 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-616586 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-616586 --network embed-certs-616586 --ip 192.168.85.2 --volume embed-certs-616586:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:49:52.267221  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Running}}
	I1126 20:49:52.296441  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:49:52.333432  214963 cli_runner.go:164] Run: docker exec embed-certs-616586 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:49:52.411604  214963 oci.go:144] the created container "embed-certs-616586" has a running status.
	I1126 20:49:52.411629  214963 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa...
	I1126 20:49:52.999031  214963 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:49:53.020668  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:49:53.043491  214963 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:49:53.043510  214963 kic_runner.go:114] Args: [docker exec --privileged embed-certs-616586 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:49:53.127201  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:49:53.155415  214963 machine.go:94] provisionDockerMachine start ...
	I1126 20:49:53.155586  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:53.183062  214963 main.go:143] libmachine: Using SSH client type: native
	I1126 20:49:53.183410  214963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1126 20:49:53.183424  214963 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:49:53.184214  214963 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50420->127.0.0.1:33063: read: connection reset by peer
	W1126 20:49:56.813253  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:49:58.818695  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:49:56.333671  214963 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-616586
	
	I1126 20:49:56.333696  214963 ubuntu.go:182] provisioning hostname "embed-certs-616586"
	I1126 20:49:56.333763  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:56.367344  214963 main.go:143] libmachine: Using SSH client type: native
	I1126 20:49:56.367661  214963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1126 20:49:56.367677  214963 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-616586 && echo "embed-certs-616586" | sudo tee /etc/hostname
	I1126 20:49:56.547573  214963 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-616586
	
	I1126 20:49:56.547666  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:56.568715  214963 main.go:143] libmachine: Using SSH client type: native
	I1126 20:49:56.569043  214963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1126 20:49:56.569064  214963 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-616586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-616586/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-616586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:49:56.737946  214963 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:49:56.737974  214963 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:49:56.738003  214963 ubuntu.go:190] setting up certificates
	I1126 20:49:56.738013  214963 provision.go:84] configureAuth start
	I1126 20:49:56.738069  214963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:49:56.755224  214963 provision.go:143] copyHostCerts
	I1126 20:49:56.755285  214963 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:49:56.755294  214963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:49:56.755368  214963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:49:56.755484  214963 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:49:56.755494  214963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:49:56.755520  214963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:49:56.755568  214963 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:49:56.755573  214963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:49:56.755597  214963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:49:56.755640  214963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.embed-certs-616586 san=[127.0.0.1 192.168.85.2 embed-certs-616586 localhost minikube]
	I1126 20:49:57.069867  214963 provision.go:177] copyRemoteCerts
	I1126 20:49:57.069981  214963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:49:57.070028  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.087034  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:49:57.195097  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:49:57.224471  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:49:57.245971  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1126 20:49:57.268387  214963 provision.go:87] duration metric: took 530.350086ms to configureAuth
	I1126 20:49:57.268465  214963 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:49:57.268702  214963 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:49:57.268900  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.284926  214963 main.go:143] libmachine: Using SSH client type: native
	I1126 20:49:57.285231  214963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1126 20:49:57.285245  214963 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:49:57.605655  214963 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:49:57.605678  214963 machine.go:97] duration metric: took 4.450244674s to provisionDockerMachine
	I1126 20:49:57.605688  214963 client.go:176] duration metric: took 11.524769101s to LocalClient.Create
	I1126 20:49:57.605699  214963 start.go:167] duration metric: took 11.524828447s to libmachine.API.Create "embed-certs-616586"
	I1126 20:49:57.605706  214963 start.go:293] postStartSetup for "embed-certs-616586" (driver="docker")
	I1126 20:49:57.605716  214963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:49:57.605787  214963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:49:57.605836  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.623323  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:49:57.730112  214963 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:49:57.733545  214963 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:49:57.733575  214963 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:49:57.733587  214963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:49:57.733641  214963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:49:57.733733  214963 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:49:57.733839  214963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:49:57.741946  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:49:57.769459  214963 start.go:296] duration metric: took 163.73825ms for postStartSetup
	I1126 20:49:57.769878  214963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:49:57.799111  214963 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/config.json ...
	I1126 20:49:57.799412  214963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:49:57.799458  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.831988  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:49:57.934928  214963 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:49:57.940415  214963 start.go:128] duration metric: took 11.863009831s to createHost
	I1126 20:49:57.940441  214963 start.go:83] releasing machines lock for "embed-certs-616586", held for 11.863134381s
	I1126 20:49:57.940527  214963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:49:57.958692  214963 ssh_runner.go:195] Run: cat /version.json
	I1126 20:49:57.958754  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.958969  214963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:49:57.959028  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.981348  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:49:57.998374  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:49:58.094130  214963 ssh_runner.go:195] Run: systemctl --version
	I1126 20:49:58.205160  214963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:49:58.259501  214963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:49:58.264372  214963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:49:58.264475  214963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:49:58.294476  214963 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1126 20:49:58.294510  214963 start.go:496] detecting cgroup driver to use...
	I1126 20:49:58.294579  214963 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:49:58.294648  214963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:49:58.322624  214963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:49:58.340653  214963 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:49:58.340750  214963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:49:58.358884  214963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:49:58.380260  214963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:49:58.550548  214963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:49:58.722709  214963 docker.go:234] disabling docker service ...
	I1126 20:49:58.722820  214963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:49:58.753799  214963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:49:58.769228  214963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:49:58.955834  214963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:49:59.128685  214963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:49:59.144825  214963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:49:59.171794  214963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:49:59.171917  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.182678  214963 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:49:59.182796  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.197348  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.215834  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.225964  214963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:49:59.237851  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.247650  214963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.263169  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.273604  214963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:49:59.282879  214963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:49:59.291619  214963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:49:59.457805  214963 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:49:59.745627  214963 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:49:59.745748  214963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:49:59.749961  214963 start.go:564] Will wait 60s for crictl version
	I1126 20:49:59.750069  214963 ssh_runner.go:195] Run: which crictl
	I1126 20:49:59.754370  214963 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:49:59.791343  214963 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:49:59.791485  214963 ssh_runner.go:195] Run: crio --version
	I1126 20:49:59.835460  214963 ssh_runner.go:195] Run: crio --version
	I1126 20:49:59.873973  214963 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:49:59.877297  214963 cli_runner.go:164] Run: docker network inspect embed-certs-616586 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:49:59.902624  214963 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:49:59.906115  214963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
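The `grep -v` + append command above is minikube's idempotent `/etc/hosts` update: any existing line for the name is dropped before the fresh mapping is appended, so reruns never duplicate entries. A minimal sketch of the same pattern (function name and sample content are illustrative, not minikube code):

```python
# Sketch of the idempotent /etc/hosts update seen in the log:
# remove any line already ending in "\t<name>", then append "<ip>\t<name>".
def update_hosts(content: str, ip: str, name: str) -> str:
    kept = [line for line in content.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
after = update_hosts(before, "192.168.85.2", "control-plane.minikube.internal")
# Applying the update again leaves the file unchanged.
assert update_hosts(after, "192.168.85.2",
                    "control-plane.minikube.internal") == after
```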
	I1126 20:49:59.916943  214963 kubeadm.go:884] updating cluster {Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:49:59.917070  214963 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:49:59.917128  214963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:49:59.976239  214963 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:49:59.976259  214963 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:49:59.976313  214963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:50:00.019132  214963 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:50:00.019155  214963 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:50:00.019164  214963 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1126 20:50:00.019285  214963 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-616586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:50:00.019386  214963 ssh_runner.go:195] Run: crio config
	I1126 20:50:00.102494  214963 cni.go:84] Creating CNI manager for ""
	I1126 20:50:00.102562  214963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:50:00.102615  214963 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:50:00.102657  214963 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-616586 NodeName:embed-certs-616586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:50:00.102835  214963 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-616586"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:50:00.102937  214963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:50:00.116384  214963 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:50:00.116520  214963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:50:00.128628  214963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1126 20:50:00.153439  214963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:50:00.175351  214963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1126 20:50:00.203711  214963 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:50:00.209088  214963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:50:00.224206  214963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:50:00.505495  214963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:50:00.530979  214963 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586 for IP: 192.168.85.2
	I1126 20:50:00.531057  214963 certs.go:195] generating shared ca certs ...
	I1126 20:50:00.531089  214963 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:00.531298  214963 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:50:00.531383  214963 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:50:00.531418  214963 certs.go:257] generating profile certs ...
	I1126 20:50:00.531496  214963 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.key
	I1126 20:50:00.531533  214963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.crt with IP's: []
	I1126 20:50:00.669552  214963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.crt ...
	I1126 20:50:00.669636  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.crt: {Name:mk8f6fd090b2026e4512f84966bafebc39935caf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:00.669823  214963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.key ...
	I1126 20:50:00.669860  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.key: {Name:mk0e96a9c7c793aab9d7251469212c3f09bb2a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:00.670104  214963 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key.319cfcc4
	I1126 20:50:00.670155  214963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt.319cfcc4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1126 20:50:00.746683  214963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt.319cfcc4 ...
	I1126 20:50:00.746821  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt.319cfcc4: {Name:mk83622506fcd15de608147d8bba410f3c71f30f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:00.746986  214963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key.319cfcc4 ...
	I1126 20:50:00.747025  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key.319cfcc4: {Name:mka925bc5d8b94d5f0457184948a6b2348c292c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:00.747135  214963 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt.319cfcc4 -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt
	I1126 20:50:00.747256  214963 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key.319cfcc4 -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key
	I1126 20:50:00.747355  214963 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key
	I1126 20:50:00.747402  214963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.crt with IP's: []
	I1126 20:50:01.128643  214963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.crt ...
	I1126 20:50:01.128720  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.crt: {Name:mk3b8db5761eb1c0869bc560526d475f1eb7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:01.128950  214963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key ...
	I1126 20:50:01.129006  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key: {Name:mk453e33a7bb657477f9975c93ee96c9cf598cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:01.129311  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:50:01.129383  214963 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:50:01.129408  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:50:01.129477  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:50:01.129536  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:50:01.129588  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:50:01.129672  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:50:01.130384  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:50:01.159113  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:50:01.195105  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:50:01.221836  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:50:01.253275  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1126 20:50:01.280101  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:50:01.315202  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:50:01.339816  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:50:01.365706  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:50:01.394439  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:50:01.423245  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:50:01.451953  214963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:50:01.502720  214963 ssh_runner.go:195] Run: openssl version
	I1126 20:50:01.523604  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:50:01.543478  214963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:50:01.550041  214963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:50:01.550167  214963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:50:01.599485  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:50:01.608618  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:50:01.619419  214963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:50:01.624227  214963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:50:01.624307  214963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:50:01.673569  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:50:01.682980  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:50:01.693107  214963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:50:01.699813  214963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:50:01.699892  214963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:50:01.744877  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:50:01.755498  214963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:50:01.760824  214963 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:50:01.760893  214963 kubeadm.go:401] StartCluster: {Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:50:01.760973  214963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:50:01.761036  214963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:50:01.802907  214963 cri.go:89] found id: ""
	I1126 20:50:01.802981  214963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:50:01.818085  214963 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:50:01.826903  214963 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:50:01.826979  214963 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:50:01.838042  214963 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:50:01.838063  214963 kubeadm.go:158] found existing configuration files:
	
	I1126 20:50:01.838116  214963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:50:01.847878  214963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:50:01.847959  214963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:50:01.856117  214963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:50:01.867182  214963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:50:01.867267  214963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:50:01.877043  214963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:50:01.886608  214963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:50:01.886681  214963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:50:01.895901  214963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:50:01.904893  214963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:50:01.904966  214963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:50:01.913427  214963 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:50:01.966669  214963 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:50:01.967094  214963 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:50:02.012730  214963 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:50:02.012842  214963 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1126 20:50:02.012895  214963 kubeadm.go:319] OS: Linux
	I1126 20:50:02.012949  214963 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:50:02.013004  214963 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1126 20:50:02.013065  214963 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:50:02.013119  214963 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:50:02.013171  214963 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:50:02.013224  214963 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:50:02.013273  214963 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:50:02.013326  214963 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:50:02.013374  214963 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1126 20:50:02.097316  214963 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:50:02.097432  214963 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:50:02.097529  214963 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:50:02.106357  214963 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1126 20:50:01.316533  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:03.813339  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:02.114650  214963 out.go:252]   - Generating certificates and keys ...
	I1126 20:50:02.114750  214963 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:50:02.114823  214963 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:50:02.403423  214963 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:50:02.784300  214963 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:50:03.158332  214963 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:50:03.696982  214963 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:50:04.129173  214963 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:50:04.130695  214963 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-616586 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1126 20:50:04.464776  214963 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:50:04.465395  214963 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-616586 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1126 20:50:05.669642  214963 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:50:05.982488  214963 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:50:06.334111  214963 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:50:06.334627  214963 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:50:07.632244  214963 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:50:08.243013  214963 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:50:08.653407  214963 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:50:09.223749  214963 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:50:09.506260  214963 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:50:09.506417  214963 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:50:09.517340  214963 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1126 20:50:05.813533  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:08.313524  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:09.526130  214963 out.go:252]   - Booting up control plane ...
	I1126 20:50:09.526300  214963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:50:09.531115  214963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:50:09.531201  214963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:50:09.568360  214963 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:50:09.568682  214963 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 20:50:09.577330  214963 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 20:50:09.577641  214963 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:50:09.577847  214963 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:50:09.758768  214963 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 20:50:09.758972  214963 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1126 20:50:10.314502  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:12.819736  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:10.759946  214963 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001282424s
	I1126 20:50:10.770267  214963 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 20:50:10.770473  214963 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1126 20:50:10.770602  214963 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 20:50:10.770694  214963 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 20:50:14.045220  214963 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.281388239s
	I1126 20:50:15.734204  214963 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.970744415s
	I1126 20:50:17.266066  214963 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502497326s
	I1126 20:50:17.290060  214963 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:50:17.314575  214963 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:50:17.326808  214963 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:50:17.327022  214963 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-616586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:50:17.341455  214963 kubeadm.go:319] [bootstrap-token] Using token: fhaqlq.94cikrh91bquxnf5
	I1126 20:50:17.344369  214963 out.go:252]   - Configuring RBAC rules ...
	I1126 20:50:17.344503  214963 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:50:17.353764  214963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:50:17.362306  214963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:50:17.366773  214963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:50:17.374177  214963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:50:17.378371  214963 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:50:17.675450  214963 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:50:18.131603  214963 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:50:18.675758  214963 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:50:18.677277  214963 kubeadm.go:319] 
	I1126 20:50:18.677375  214963 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:50:18.677388  214963 kubeadm.go:319] 
	I1126 20:50:18.677466  214963 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:50:18.677492  214963 kubeadm.go:319] 
	I1126 20:50:18.677542  214963 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:50:18.677607  214963 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:50:18.677665  214963 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:50:18.677681  214963 kubeadm.go:319] 
	I1126 20:50:18.677741  214963 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:50:18.677746  214963 kubeadm.go:319] 
	I1126 20:50:18.677795  214963 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:50:18.677799  214963 kubeadm.go:319] 
	I1126 20:50:18.677851  214963 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:50:18.677984  214963 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:50:18.678070  214963 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:50:18.678084  214963 kubeadm.go:319] 
	I1126 20:50:18.678171  214963 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:50:18.678274  214963 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:50:18.678282  214963 kubeadm.go:319] 
	I1126 20:50:18.678375  214963 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fhaqlq.94cikrh91bquxnf5 \
	I1126 20:50:18.678492  214963 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a \
	I1126 20:50:18.678538  214963 kubeadm.go:319] 	--control-plane 
	I1126 20:50:18.678543  214963 kubeadm.go:319] 
	I1126 20:50:18.678645  214963 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:50:18.678694  214963 kubeadm.go:319] 
	I1126 20:50:18.678804  214963 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fhaqlq.94cikrh91bquxnf5 \
	I1126 20:50:18.678953  214963 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a 
	I1126 20:50:18.683217  214963 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1126 20:50:18.683446  214963 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1126 20:50:18.683556  214963 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 20:50:18.683579  214963 cni.go:84] Creating CNI manager for ""
	I1126 20:50:18.683590  214963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:50:18.686815  214963 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1126 20:50:15.312459  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:17.312812  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:19.312925  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:18.689651  214963 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:50:18.696173  214963 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:50:18.696236  214963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:50:18.718569  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:50:19.483434  214963 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:50:19.483566  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:19.483647  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-616586 minikube.k8s.io/updated_at=2025_11_26T20_50_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=embed-certs-616586 minikube.k8s.io/primary=true
	I1126 20:50:19.639322  214963 ops.go:34] apiserver oom_adj: -16
	I1126 20:50:19.645164  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:20.145546  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:20.646111  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:21.145650  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:21.645817  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:22.145273  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:22.646137  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:23.145556  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:23.244879  214963 kubeadm.go:1114] duration metric: took 3.761355778s to wait for elevateKubeSystemPrivileges
	I1126 20:50:23.244915  214963 kubeadm.go:403] duration metric: took 21.484020828s to StartCluster
	I1126 20:50:23.244934  214963 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:23.245002  214963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:50:23.246314  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:23.246566  214963 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:50:23.246676  214963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:50:23.246911  214963 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:50:23.246959  214963 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:50:23.247023  214963 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-616586"
	I1126 20:50:23.247038  214963 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-616586"
	I1126 20:50:23.247067  214963 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:50:23.247584  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:50:23.248216  214963 addons.go:70] Setting default-storageclass=true in profile "embed-certs-616586"
	I1126 20:50:23.248239  214963 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-616586"
	I1126 20:50:23.248513  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:50:23.250913  214963 out.go:179] * Verifying Kubernetes components...
	I1126 20:50:23.254937  214963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:50:23.282096  214963 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:50:23.284993  214963 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:50:23.285016  214963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:50:23.285081  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:50:23.293556  214963 addons.go:239] Setting addon default-storageclass=true in "embed-certs-616586"
	I1126 20:50:23.293594  214963 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:50:23.294309  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:50:23.325768  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:50:23.339602  214963 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:50:23.339624  214963 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:50:23.339687  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:50:23.369979  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:50:23.639340  214963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:50:23.647547  214963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:50:23.650762  214963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:50:23.650878  214963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:50:24.602499  214963 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1126 20:50:24.603311  214963 node_ready.go:35] waiting up to 6m0s for node "embed-certs-616586" to be "Ready" ...
	I1126 20:50:24.656251  214963 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1126 20:50:21.812110  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:23.812216  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:24.659038  214963 addons.go:530] duration metric: took 1.412074022s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:50:25.107708  214963 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-616586" context rescaled to 1 replicas
	W1126 20:50:25.812754  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:28.312728  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:26.606318  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:28.607259  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:30.313004  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:32.313619  212927 pod_ready.go:94] pod "coredns-66bc5c9577-4z56c" is "Ready"
	I1126 20:50:32.313648  212927 pod_ready.go:86] duration metric: took 37.506488802s for pod "coredns-66bc5c9577-4z56c" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.316988  212927 pod_ready.go:83] waiting for pod "etcd-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.321819  212927 pod_ready.go:94] pod "etcd-no-preload-956694" is "Ready"
	I1126 20:50:32.321847  212927 pod_ready.go:86] duration metric: took 4.830935ms for pod "etcd-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.325248  212927 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.330179  212927 pod_ready.go:94] pod "kube-apiserver-no-preload-956694" is "Ready"
	I1126 20:50:32.330207  212927 pod_ready.go:86] duration metric: took 4.930017ms for pod "kube-apiserver-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.332576  212927 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.511770  212927 pod_ready.go:94] pod "kube-controller-manager-no-preload-956694" is "Ready"
	I1126 20:50:32.511807  212927 pod_ready.go:86] duration metric: took 179.202247ms for pod "kube-controller-manager-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.711624  212927 pod_ready.go:83] waiting for pod "kube-proxy-2j4dg" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:33.111211  212927 pod_ready.go:94] pod "kube-proxy-2j4dg" is "Ready"
	I1126 20:50:33.111240  212927 pod_ready.go:86] duration metric: took 399.589365ms for pod "kube-proxy-2j4dg" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:33.311566  212927 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:33.710664  212927 pod_ready.go:94] pod "kube-scheduler-no-preload-956694" is "Ready"
	I1126 20:50:33.710757  212927 pod_ready.go:86] duration metric: took 399.162584ms for pod "kube-scheduler-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:33.710788  212927 pod_ready.go:40] duration metric: took 38.907115315s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:50:33.768957  212927 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:50:33.773897  212927 out.go:179] * Done! kubectl is now configured to use "no-preload-956694" cluster and "default" namespace by default
	W1126 20:50:31.106298  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:33.606117  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:35.606207  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:37.606414  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:40.106509  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:42.107361  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:44.606475  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.271176855Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.274670829Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.274701663Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.274721913Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.27782897Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.277859648Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.277882104Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.281974223Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.282009348Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.28203402Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.285040411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.285070794Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.743893914Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=213f6b28-4145-40a1-9743-90802784c8d8 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.745469481Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=65f7d17e-9d59-43ab-a045-20c23be603ce name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.746594362Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q/dashboard-metrics-scraper" id=477f6cc4-2f6c-4573-857f-01b93a85f53c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.746695422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.754475724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.755206582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.776682792Z" level=info msg="Created container 29aceaa82429db92b12b0fa7cd1c23589c67124c5ba0a8f019d64c3035e55cf4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q/dashboard-metrics-scraper" id=477f6cc4-2f6c-4573-857f-01b93a85f53c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.779291952Z" level=info msg="Starting container: 29aceaa82429db92b12b0fa7cd1c23589c67124c5ba0a8f019d64c3035e55cf4" id=d0a43b1f-c18c-4d03-b66c-7e858eed365b name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.78136672Z" level=info msg="Started container" PID=1722 containerID=29aceaa82429db92b12b0fa7cd1c23589c67124c5ba0a8f019d64c3035e55cf4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q/dashboard-metrics-scraper id=d0a43b1f-c18c-4d03-b66c-7e858eed365b name=/runtime.v1.RuntimeService/StartContainer sandboxID=83512babed2bf644e30f537effb6871e91e1e8f5e6cb3c1dc4c996f755f23066
	Nov 26 20:50:41 no-preload-956694 conmon[1719]: conmon 29aceaa82429db92b12b <ninfo>: container 1722 exited with status 1
	Nov 26 20:50:42 no-preload-956694 crio[657]: time="2025-11-26T20:50:42.121646869Z" level=info msg="Removing container: e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83" id=29f0de8c-f378-44c4-845e-273d5c21bc02 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:50:42 no-preload-956694 crio[657]: time="2025-11-26T20:50:42.187846711Z" level=info msg="Error loading conmon cgroup of container e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83: cgroup deleted" id=29f0de8c-f378-44c4-845e-273d5c21bc02 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:50:42 no-preload-956694 crio[657]: time="2025-11-26T20:50:42.207667865Z" level=info msg="Removed container e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q/dashboard-metrics-scraper" id=29f0de8c-f378-44c4-845e-273d5c21bc02 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	29aceaa82429d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago        Exited              dashboard-metrics-scraper   3                   83512babed2bf       dashboard-metrics-scraper-6ffb444bf9-jk74q   kubernetes-dashboard
	6a0c09bf8b235       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   1b1a416309b73       storage-provisioner                          kube-system
	ac76226123cfd       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   23c69956ad523       kubernetes-dashboard-855c9754f9-f79rr        kubernetes-dashboard
	0554e6955b891       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   67b102b9c3876       coredns-66bc5c9577-4z56c                     kube-system
	524f8264faaa1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   79ce7f2599553       busybox                                      default
	9367fa09811bc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   69ad0c5761aee       kindnet-dfdbx                                kube-system
	39dbe8551a738       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   bfa318e4f0fac       kube-proxy-2j4dg                             kube-system
	fe095a7725bd2       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           54 seconds ago       Exited              storage-provisioner         1                   1b1a416309b73       storage-provisioner                          kube-system
	64bf641df6328       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   88eb349d9ffe2       kube-scheduler-no-preload-956694             kube-system
	69bdaac7802d2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   8e16cc2ba0274       etcd-no-preload-956694                       kube-system
	166f0bf71ff63       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   5e1c7e20921d3       kube-apiserver-no-preload-956694             kube-system
	732f8dd674b2a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   b7ecdd70219df       kube-controller-manager-no-preload-956694    kube-system
	
	
	==> coredns [0554e6955b891b84949248d4dd7484a05d62ffe5fb5cc50417b0300d8db3c64e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55523 - 31262 "HINFO IN 8004151322408716222.6502235733554110869. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023671659s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-956694
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-956694
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=no-preload-956694
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_48_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:48:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-956694
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:50:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:50:23 +0000   Wed, 26 Nov 2025 20:48:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:50:23 +0000   Wed, 26 Nov 2025 20:48:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:50:23 +0000   Wed, 26 Nov 2025 20:48:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:50:23 +0000   Wed, 26 Nov 2025 20:49:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-956694
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                ca0edc11-ec05-4f09-ac60-84d8767e18da
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-4z56c                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     113s
	  kube-system                 etcd-no-preload-956694                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-dfdbx                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-956694              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-956694     200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-2j4dg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-956694              100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jk74q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f79rr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 112s                 kube-proxy       
	  Normal   Starting                 54s                  kube-proxy       
	  Warning  CgroupV1                 2m5s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node no-preload-956694 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node no-preload-956694 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node no-preload-956694 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  118s                 kubelet          Node no-preload-956694 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 118s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    118s                 kubelet          Node no-preload-956694 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s                 kubelet          Node no-preload-956694 status is now: NodeHasSufficientPID
	  Normal   Starting                 118s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           114s                 node-controller  Node no-preload-956694 event: Registered Node no-preload-956694 in Controller
	  Normal   NodeReady                100s                 kubelet          Node no-preload-956694 status is now: NodeReady
	  Normal   Starting                 65s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)    kubelet          Node no-preload-956694 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)    kubelet          Node no-preload-956694 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)    kubelet          Node no-preload-956694 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                  node-controller  Node no-preload-956694 event: Registered Node no-preload-956694 in Controller
	
	
	==> dmesg <==
	[Nov26 20:23] overlayfs: idmapped layers are currently not supported
	[Nov26 20:24] overlayfs: idmapped layers are currently not supported
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	[Nov26 20:49] overlayfs: idmapped layers are currently not supported
	[Nov26 20:50] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [69bdaac7802d27e42ed29500b6c8549fd05c61287e8c9653748bb2accdeae2e1] <==
	{"level":"warn","ts":"2025-11-26T20:49:50.224418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.258130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.284997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.322099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.372177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.399086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.428996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.464864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.475077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.522104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.528749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.551183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.572719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.597641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.610601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.637174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.655256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.672603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.689490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.714396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.736139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.775134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.830350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.855061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.969565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43488","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:50:48 up  1:32,  0 user,  load average: 3.60, 3.17, 2.51
	Linux no-preload-956694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9367fa09811bc7824710f65db213810a28f4a5b2e9e228aec215eff41118f2d9] <==
	I1126 20:49:54.129127       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:49:54.129339       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:49:54.129447       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:49:54.129458       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:49:54.129468       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:49:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:49:54.328498       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:49:54.328525       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:49:54.328534       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:49:54.328842       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:50:24.261296       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:50:24.328968       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:50:24.329161       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:50:24.329283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1126 20:50:25.828698       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:50:25.828734       1 metrics.go:72] Registering metrics
	I1126 20:50:25.828784       1 controller.go:711] "Syncing nftables rules"
	I1126 20:50:34.262255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:50:34.262312       1 main.go:301] handling current node
	I1126 20:50:44.261909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:50:44.261970       1 main.go:301] handling current node
	
	
	==> kube-apiserver [166f0bf71ff637391d7021779d2e2a5d27dea53b2e94af5da7c6556cf939eefc] <==
	I1126 20:49:52.350745       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1126 20:49:52.354972       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:49:52.355046       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:49:52.355319       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:49:52.378217       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1126 20:49:52.378266       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:49:52.379092       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:49:52.401157       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:49:52.429024       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:49:52.438840       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:49:52.438866       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:49:52.438873       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:49:52.438893       1 cache.go:39] Caches are synced for autoregister controller
	E1126 20:49:52.615777       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:49:52.971753       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:49:53.137502       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:49:54.472312       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:49:54.514871       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:49:54.559304       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:49:54.575817       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:49:54.665907       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.90.118"}
	I1126 20:49:54.707837       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.16.56"}
	I1126 20:49:56.864257       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:49:56.966871       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:49:57.017165       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [732f8dd674b2a79542d6b5db5ae656af930d6da79a225d1e0dbcfdec933c1b97] <==
	I1126 20:49:56.432881       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:49:56.434167       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:49:56.436302       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:49:56.437236       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:49:56.437317       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:49:56.445229       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:49:56.445254       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:49:56.447530       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:49:56.445269       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 20:49:56.451483       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1126 20:49:56.453510       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 20:49:56.456323       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:49:56.456501       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 20:49:56.457316       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:49:56.457332       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:49:56.459681       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:49:56.458913       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 20:49:56.464948       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1126 20:49:56.466158       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:49:56.466805       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 20:49:56.474757       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:49:56.506665       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:49:56.506757       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:49:56.506789       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:49:56.531747       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [39dbe8551a73859abffe10915c3f3e6c1fd1869e9b974e6953b486b1a5d2578d] <==
	I1126 20:49:54.204125       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:49:54.343054       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:49:54.452697       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:49:54.452735       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1126 20:49:54.452839       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:49:54.528615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:49:54.528667       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:49:54.562167       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:49:54.562755       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:49:54.563008       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:49:54.565239       1 config.go:200] "Starting service config controller"
	I1126 20:49:54.570344       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:49:54.573696       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:49:54.576810       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:49:54.576973       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:49:54.577017       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:49:54.586867       1 config.go:309] "Starting node config controller"
	I1126 20:49:54.591100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:49:54.591406       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:49:54.675008       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:49:54.677275       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:49:54.677314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [64bf641df6328d766e26a8b3d40eb3a629a1b6d5034073ad5e5eacc3049b071b] <==
	I1126 20:49:50.318713       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:49:52.205122       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:49:52.205152       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:49:52.205162       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:49:52.205180       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:49:52.591225       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:49:52.591253       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:49:52.608888       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:49:52.609568       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:49:52.612058       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:49:52.612351       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:49:52.710097       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:49:57 no-preload-956694 kubelet[778]: W1126 20:49:57.535913     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/crio-23c69956ad523eff3ccfa7b73af3797ce1495572840d5f3cec267618b8a0f42e WatchSource:0}: Error finding container 23c69956ad523eff3ccfa7b73af3797ce1495572840d5f3cec267618b8a0f42e: Status 404 returned error can't find the container with id 23c69956ad523eff3ccfa7b73af3797ce1495572840d5f3cec267618b8a0f42e
	Nov 26 20:50:02 no-preload-956694 kubelet[778]: I1126 20:50:02.159054     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:50:03 no-preload-956694 kubelet[778]: I1126 20:50:03.978431     778 scope.go:117] "RemoveContainer" containerID="a7666dc44e48e4adc5ca904c88567f71c5adf39895c805f965f37a15c6fc8c9d"
	Nov 26 20:50:04 no-preload-956694 kubelet[778]: I1126 20:50:04.984789     778 scope.go:117] "RemoveContainer" containerID="a7666dc44e48e4adc5ca904c88567f71c5adf39895c805f965f37a15c6fc8c9d"
	Nov 26 20:50:04 no-preload-956694 kubelet[778]: I1126 20:50:04.985103     778 scope.go:117] "RemoveContainer" containerID="3e5d7bc87da7fc300c53d05db4e764ac484d78646476776b98cc3b1fb9e9361b"
	Nov 26 20:50:04 no-preload-956694 kubelet[778]: E1126 20:50:04.985268     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:05 no-preload-956694 kubelet[778]: I1126 20:50:05.989799     778 scope.go:117] "RemoveContainer" containerID="3e5d7bc87da7fc300c53d05db4e764ac484d78646476776b98cc3b1fb9e9361b"
	Nov 26 20:50:05 no-preload-956694 kubelet[778]: E1126 20:50:05.994865     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:07 no-preload-956694 kubelet[778]: I1126 20:50:07.483311     778 scope.go:117] "RemoveContainer" containerID="3e5d7bc87da7fc300c53d05db4e764ac484d78646476776b98cc3b1fb9e9361b"
	Nov 26 20:50:07 no-preload-956694 kubelet[778]: E1126 20:50:07.483482     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:20 no-preload-956694 kubelet[778]: I1126 20:50:20.743306     778 scope.go:117] "RemoveContainer" containerID="3e5d7bc87da7fc300c53d05db4e764ac484d78646476776b98cc3b1fb9e9361b"
	Nov 26 20:50:21 no-preload-956694 kubelet[778]: I1126 20:50:21.035569     778 scope.go:117] "RemoveContainer" containerID="3e5d7bc87da7fc300c53d05db4e764ac484d78646476776b98cc3b1fb9e9361b"
	Nov 26 20:50:21 no-preload-956694 kubelet[778]: I1126 20:50:21.035867     778 scope.go:117] "RemoveContainer" containerID="e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83"
	Nov 26 20:50:21 no-preload-956694 kubelet[778]: E1126 20:50:21.036064     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:21 no-preload-956694 kubelet[778]: I1126 20:50:21.060553     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f79rr" podStartSLOduration=11.366549392 podStartE2EDuration="24.060465986s" podCreationTimestamp="2025-11-26 20:49:57 +0000 UTC" firstStartedPulling="2025-11-26 20:49:57.543592816 +0000 UTC m=+14.054369669" lastFinishedPulling="2025-11-26 20:50:10.237509418 +0000 UTC m=+26.748286263" observedRunningTime="2025-11-26 20:50:11.02261951 +0000 UTC m=+27.533396372" watchObservedRunningTime="2025-11-26 20:50:21.060465986 +0000 UTC m=+37.571242839"
	Nov 26 20:50:24 no-preload-956694 kubelet[778]: I1126 20:50:24.047507     778 scope.go:117] "RemoveContainer" containerID="fe095a7725bd274ab36ace78665c689e31b9870d45c3f58f42466f2b19ca1bac"
	Nov 26 20:50:27 no-preload-956694 kubelet[778]: I1126 20:50:27.483757     778 scope.go:117] "RemoveContainer" containerID="e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83"
	Nov 26 20:50:27 no-preload-956694 kubelet[778]: E1126 20:50:27.484424     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:41 no-preload-956694 kubelet[778]: I1126 20:50:41.743150     778 scope.go:117] "RemoveContainer" containerID="e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83"
	Nov 26 20:50:42 no-preload-956694 kubelet[778]: I1126 20:50:42.112113     778 scope.go:117] "RemoveContainer" containerID="e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83"
	Nov 26 20:50:42 no-preload-956694 kubelet[778]: I1126 20:50:42.116851     778 scope.go:117] "RemoveContainer" containerID="29aceaa82429db92b12b0fa7cd1c23589c67124c5ba0a8f019d64c3035e55cf4"
	Nov 26 20:50:42 no-preload-956694 kubelet[778]: E1126 20:50:42.117235     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:46 no-preload-956694 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:50:46 no-preload-956694 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:50:46 no-preload-956694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ac76226123cfd439ff139dd22351505e7daee346df3e1e19dcb7f7d973283462] <==
	2025/11/26 20:50:10 Using namespace: kubernetes-dashboard
	2025/11/26 20:50:10 Using in-cluster config to connect to apiserver
	2025/11/26 20:50:10 Using secret token for csrf signing
	2025/11/26 20:50:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:50:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:50:10 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:50:10 Generating JWE encryption key
	2025/11/26 20:50:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:50:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:50:10 Initializing JWE encryption key from synchronized object
	2025/11/26 20:50:10 Creating in-cluster Sidecar client
	2025/11/26 20:50:10 Serving insecurely on HTTP port: 9090
	2025/11/26 20:50:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:50:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:50:10 Starting overwatch
	
	
	==> storage-provisioner [6a0c09bf8b235fc0d759a84c8b8fdceafe61508be112ec1cf5b51a0d6b389fa7] <==
	I1126 20:50:24.154055       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:50:24.193371       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:50:24.193482       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:50:24.198207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:27.653354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:31.913987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:35.512644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:38.566211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:41.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:41.593337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:50:41.593581       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:50:41.593781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-956694_39ef9c9d-7e87-4b0e-850f-3286c711d3bb!
	I1126 20:50:41.594306       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa3c6d99-6069-4dc6-b561-d2344160065e", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-956694_39ef9c9d-7e87-4b0e-850f-3286c711d3bb became leader
	W1126 20:50:41.598316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:41.607748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:50:41.694621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-956694_39ef9c9d-7e87-4b0e-850f-3286c711d3bb!
	W1126 20:50:43.611325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:43.618340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:45.623938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:45.630860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:47.635067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:47.642982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fe095a7725bd274ab36ace78665c689e31b9870d45c3f58f42466f2b19ca1bac] <==
	I1126 20:49:53.734009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:50:23.739386       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-956694 -n no-preload-956694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-956694 -n no-preload-956694: exit status 2 (365.414669ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-956694 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-956694
helpers_test.go:243: (dbg) docker inspect no-preload-956694:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b",
	        "Created": "2025-11-26T20:48:11.257955221Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213100,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:49:35.316478343Z",
	            "FinishedAt": "2025-11-26T20:49:34.266543458Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/hostname",
	        "HostsPath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/hosts",
	        "LogPath": "/var/lib/docker/containers/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b-json.log",
	        "Name": "/no-preload-956694",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-956694:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-956694",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b",
	                "LowerDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0080b323bab4635def865bc48fab6d44d62fded9322f96dda189563e0aed4165/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-956694",
	                "Source": "/var/lib/docker/volumes/no-preload-956694/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-956694",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-956694",
	                "name.minikube.sigs.k8s.io": "no-preload-956694",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d686f6f27aef486c06404574de0c4ae344714a7029d94221211ab1f31ad7896",
	            "SandboxKey": "/var/run/docker/netns/1d686f6f27ae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-956694": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:8b:4b:9d:e5:d7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "32516947827eacd2aa341e65200cd5dd0564df7db92f9b17b625c9371ac2deac",
	                    "EndpointID": "a0b2078c7e066324d5ef49c0f64bd6081628035ebd9b5da162ec361fc6bf51be",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-956694",
	                        "53e8b694caf6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-956694 -n no-preload-956694
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-956694 -n no-preload-956694: exit status 2 (360.904766ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-956694 logs -n 25
E1126 20:50:51.180874    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-956694 logs -n 25: (1.283813885s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-164741   │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ delete  │ -p force-systemd-env-274518                                                                                                                                                                                                                   │ force-systemd-env-274518 │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:44 UTC │
	│ start   │ -p cert-options-207115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-207115      │ jenkins │ v1.37.0 │ 26 Nov 25 20:44 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ cert-options-207115 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-207115      │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ ssh     │ -p cert-options-207115 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-207115      │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ delete  │ -p cert-options-207115                                                                                                                                                                                                                        │ cert-options-207115      │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-264537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │                     │
	│ stop    │ -p old-k8s-version-264537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-264537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:47 UTC │
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164741   │ jenkins │ v1.37.0 │ 26 Nov 25 20:47 UTC │ 26 Nov 25 20:49 UTC │
	│ image   │ old-k8s-version-264537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ pause   │ -p old-k8s-version-264537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │                     │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                                                                                     │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                                                                                     │ old-k8s-version-264537   │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable metrics-server -p no-preload-956694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │                     │
	│ stop    │ -p no-preload-956694 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable dashboard -p no-preload-956694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p cert-expiration-164741                                                                                                                                                                                                                     │ cert-expiration-164741   │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-616586       │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │                     │
	│ image   │ no-preload-956694 image list --format=json                                                                                                                                                                                                    │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ pause   │ -p no-preload-956694 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-956694        │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:49:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:49:45.715424  214963 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:49:45.715639  214963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:49:45.715668  214963 out.go:374] Setting ErrFile to fd 2...
	I1126 20:49:45.715687  214963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:49:45.715970  214963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:49:45.716418  214963 out.go:368] Setting JSON to false
	I1126 20:49:45.717415  214963 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5516,"bootTime":1764184670,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:49:45.717506  214963 start.go:143] virtualization:  
	I1126 20:49:45.721672  214963 out.go:179] * [embed-certs-616586] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:49:45.726345  214963 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:49:45.726407  214963 notify.go:221] Checking for updates...
	I1126 20:49:45.733117  214963 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:49:45.736426  214963 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:49:45.739723  214963 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:49:45.743002  214963 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:49:45.746718  214963 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:49:45.750467  214963 config.go:182] Loaded profile config "no-preload-956694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:49:45.750629  214963 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:49:45.808693  214963 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:49:45.808811  214963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:49:45.924761  214963 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-26 20:49:45.908978023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:49:45.924866  214963 docker.go:319] overlay module found
	I1126 20:49:45.928197  214963 out.go:179] * Using the docker driver based on user configuration
	I1126 20:49:45.931161  214963 start.go:309] selected driver: docker
	I1126 20:49:45.931186  214963 start.go:927] validating driver "docker" against <nil>
	I1126 20:49:45.931200  214963 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:49:45.931955  214963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:49:46.033996  214963 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-26 20:49:46.021019422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:49:46.034185  214963 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 20:49:46.034407  214963 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:49:46.037353  214963 out.go:179] * Using Docker driver with root privileges
	I1126 20:49:46.040251  214963 cni.go:84] Creating CNI manager for ""
	I1126 20:49:46.040326  214963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:49:46.040344  214963 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 20:49:46.040427  214963 start.go:353] cluster config:
	{Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:49:46.043659  214963 out.go:179] * Starting "embed-certs-616586" primary control-plane node in "embed-certs-616586" cluster
	I1126 20:49:46.046648  214963 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:49:46.049643  214963 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:49:46.052481  214963 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:49:46.052537  214963 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:49:46.052552  214963 cache.go:65] Caching tarball of preloaded images
	I1126 20:49:46.052642  214963 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:49:46.052657  214963 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:49:46.052766  214963 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/config.json ...
	I1126 20:49:46.052793  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/config.json: {Name:mkfc2b593589c46372a12f9fd8a847f9f5683e74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:49:46.052962  214963 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:49:46.077125  214963 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:49:46.077144  214963 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:49:46.077160  214963 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:49:46.077190  214963 start.go:360] acquireMachinesLock for embed-certs-616586: {Name:mka5254437f68c39e0c98d2ff47cae58581678c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:49:46.077298  214963 start.go:364] duration metric: took 93.371µs to acquireMachinesLock for "embed-certs-616586"
	I1126 20:49:46.077322  214963 start.go:93] Provisioning new machine with config: &{Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:49:46.077390  214963 start.go:125] createHost starting for "" (driver="docker")
	I1126 20:49:44.931609  212927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:49:44.981265  212927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:49:44.981565  212927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:49:45.038216  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:49:45.038246  212927 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:49:45.190496  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:49:45.190569  212927 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:49:45.326171  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:49:45.326193  212927 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:49:45.485743  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:49:45.485763  212927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:49:45.518419  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:49:45.518439  212927 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:49:45.558454  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:49:45.558480  212927 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:49:45.578627  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:49:45.578647  212927 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:49:45.596966  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:49:45.596987  212927 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:49:45.658160  212927 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:49:45.658182  212927 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:49:45.718387  212927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:49:46.080640  214963 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:49:46.080873  214963 start.go:159] libmachine.API.Create for "embed-certs-616586" (driver="docker")
	I1126 20:49:46.080913  214963 client.go:173] LocalClient.Create starting
	I1126 20:49:46.081015  214963 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem
	I1126 20:49:46.081054  214963 main.go:143] libmachine: Decoding PEM data...
	I1126 20:49:46.081073  214963 main.go:143] libmachine: Parsing certificate...
	I1126 20:49:46.081133  214963 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem
	I1126 20:49:46.081150  214963 main.go:143] libmachine: Decoding PEM data...
	I1126 20:49:46.081162  214963 main.go:143] libmachine: Parsing certificate...
	I1126 20:49:46.081547  214963 cli_runner.go:164] Run: docker network inspect embed-certs-616586 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:49:46.114345  214963 cli_runner.go:211] docker network inspect embed-certs-616586 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:49:46.114424  214963 network_create.go:284] running [docker network inspect embed-certs-616586] to gather additional debugging logs...
	I1126 20:49:46.114441  214963 cli_runner.go:164] Run: docker network inspect embed-certs-616586
	W1126 20:49:46.146105  214963 cli_runner.go:211] docker network inspect embed-certs-616586 returned with exit code 1
	I1126 20:49:46.146133  214963 network_create.go:287] error running [docker network inspect embed-certs-616586]: docker network inspect embed-certs-616586: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-616586 not found
	I1126 20:49:46.146148  214963 network_create.go:289] output of [docker network inspect embed-certs-616586]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-616586 not found
	
	** /stderr **
	I1126 20:49:46.146255  214963 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:49:46.169829  214963 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20cb65a83ad5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:26:47:2b:2e:03} reservation:<nil>}
	I1126 20:49:46.170239  214963 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-16105a7ff776 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:75:f6:9d:ad:ac} reservation:<nil>}
	I1126 20:49:46.170569  214963 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f1c69ea9dfa3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b7:bf:8a:44:80} reservation:<nil>}
	I1126 20:49:46.170805  214963 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-32516947827e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:91:1e:d5:75:89} reservation:<nil>}
	I1126 20:49:46.171194  214963 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a22060}
	I1126 20:49:46.171212  214963 network_create.go:124] attempt to create docker network embed-certs-616586 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1126 20:49:46.171265  214963 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-616586 embed-certs-616586
	I1126 20:49:46.257500  214963 network_create.go:108] docker network embed-certs-616586 192.168.85.0/24 created
	I1126 20:49:46.257529  214963 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-616586" container
	I1126 20:49:46.257604  214963 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:49:46.290159  214963 cli_runner.go:164] Run: docker volume create embed-certs-616586 --label name.minikube.sigs.k8s.io=embed-certs-616586 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:49:46.318051  214963 oci.go:103] Successfully created a docker volume embed-certs-616586
	I1126 20:49:46.318140  214963 cli_runner.go:164] Run: docker run --rm --name embed-certs-616586-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-616586 --entrypoint /usr/bin/test -v embed-certs-616586:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:49:47.028412  214963 oci.go:107] Successfully prepared a docker volume embed-certs-616586
	I1126 20:49:47.028470  214963 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:49:47.028480  214963 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:49:47.028548  214963 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-616586:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:49:54.698522  212927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.766875402s)
	I1126 20:49:54.698585  212927 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.717296264s)
	I1126 20:49:54.698610  212927 node_ready.go:35] waiting up to 6m0s for node "no-preload-956694" to be "Ready" ...
	I1126 20:49:54.698920  212927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.717332307s)
	I1126 20:49:54.720089  212927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.001666006s)
	I1126 20:49:54.723306  212927 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-956694 addons enable metrics-server
	
	I1126 20:49:54.738195  212927 node_ready.go:49] node "no-preload-956694" is "Ready"
	I1126 20:49:54.738225  212927 node_ready.go:38] duration metric: took 39.587714ms for node "no-preload-956694" to be "Ready" ...
	I1126 20:49:54.738238  212927 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:49:54.738292  212927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:49:54.746419  212927 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1126 20:49:54.749152  212927 addons.go:530] duration metric: took 10.284556136s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1126 20:49:54.752730  212927 api_server.go:72] duration metric: took 10.288493985s to wait for apiserver process to appear ...
	I1126 20:49:54.752753  212927 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:49:54.752771  212927 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1126 20:49:54.764603  212927 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1126 20:49:54.765825  212927 api_server.go:141] control plane version: v1.34.1
	I1126 20:49:54.765852  212927 api_server.go:131] duration metric: took 13.091823ms to wait for apiserver health ...
	I1126 20:49:54.765862  212927 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:49:54.770111  212927 system_pods.go:59] 8 kube-system pods found
	I1126 20:49:54.770152  212927 system_pods.go:61] "coredns-66bc5c9577-4z56c" [adf50d03-764a-47f2-8b7b-85682915bd69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:49:54.770161  212927 system_pods.go:61] "etcd-no-preload-956694" [30a458d9-4cc3-4efc-ac03-1c12fd3467b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:49:54.770170  212927 system_pods.go:61] "kindnet-dfdbx" [68b183f2-571b-476a-924c-7b0a22cfe302] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:49:54.770177  212927 system_pods.go:61] "kube-apiserver-no-preload-956694" [19dfb0a5-0634-42eb-b9a2-44bf5665b3ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:49:54.770185  212927 system_pods.go:61] "kube-controller-manager-no-preload-956694" [56618fe0-6b76-493c-986e-3acf20cc0c46] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:49:54.770191  212927 system_pods.go:61] "kube-proxy-2j4dg" [c799d69f-b86f-4ef0-82b2-0b4200f9164f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:49:54.770200  212927 system_pods.go:61] "kube-scheduler-no-preload-956694" [07469dd8-7c87-4bea-8dda-a24815aa6db1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:49:54.770204  212927 system_pods.go:61] "storage-provisioner" [c37b32d0-5da0-4557-91cf-d1d082be9471] Running
	I1126 20:49:54.770218  212927 system_pods.go:74] duration metric: took 4.349339ms to wait for pod list to return data ...
	I1126 20:49:54.770225  212927 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:49:54.773879  212927 default_sa.go:45] found service account: "default"
	I1126 20:49:54.773907  212927 default_sa.go:55] duration metric: took 3.671189ms for default service account to be created ...
	I1126 20:49:54.773946  212927 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:49:54.777446  212927 system_pods.go:86] 8 kube-system pods found
	I1126 20:49:54.777480  212927 system_pods.go:89] "coredns-66bc5c9577-4z56c" [adf50d03-764a-47f2-8b7b-85682915bd69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:49:54.777489  212927 system_pods.go:89] "etcd-no-preload-956694" [30a458d9-4cc3-4efc-ac03-1c12fd3467b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:49:54.777499  212927 system_pods.go:89] "kindnet-dfdbx" [68b183f2-571b-476a-924c-7b0a22cfe302] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:49:54.777506  212927 system_pods.go:89] "kube-apiserver-no-preload-956694" [19dfb0a5-0634-42eb-b9a2-44bf5665b3ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:49:54.777514  212927 system_pods.go:89] "kube-controller-manager-no-preload-956694" [56618fe0-6b76-493c-986e-3acf20cc0c46] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:49:54.777520  212927 system_pods.go:89] "kube-proxy-2j4dg" [c799d69f-b86f-4ef0-82b2-0b4200f9164f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:49:54.777527  212927 system_pods.go:89] "kube-scheduler-no-preload-956694" [07469dd8-7c87-4bea-8dda-a24815aa6db1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:49:54.777533  212927 system_pods.go:89] "storage-provisioner" [c37b32d0-5da0-4557-91cf-d1d082be9471] Running
	I1126 20:49:54.777540  212927 system_pods.go:126] duration metric: took 3.588468ms to wait for k8s-apps to be running ...
	I1126 20:49:54.777553  212927 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:49:54.777605  212927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:49:54.791301  212927 system_svc.go:56] duration metric: took 13.739812ms WaitForService to wait for kubelet
	I1126 20:49:54.791329  212927 kubeadm.go:587] duration metric: took 10.327095842s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:49:54.791348  212927 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:49:54.795450  212927 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:49:54.795487  212927 node_conditions.go:123] node cpu capacity is 2
	I1126 20:49:54.795507  212927 node_conditions.go:105] duration metric: took 4.153751ms to run NodePressure ...
	I1126 20:49:54.795522  212927 start.go:242] waiting for startup goroutines ...
	I1126 20:49:54.795546  212927 start.go:247] waiting for cluster config update ...
	I1126 20:49:54.795565  212927 start.go:256] writing updated cluster config ...
	I1126 20:49:54.795841  212927 ssh_runner.go:195] Run: rm -f paused
	I1126 20:49:54.803640  212927 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:49:54.807132  212927 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4z56c" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:49:51.667139  214963 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-616586:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.638556373s)
	I1126 20:49:51.667174  214963 kic.go:203] duration metric: took 4.638690629s to extract preloaded images to volume ...
	W1126 20:49:51.667312  214963 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1126 20:49:51.667432  214963 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:49:51.778630  214963 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-616586 --name embed-certs-616586 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-616586 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-616586 --network embed-certs-616586 --ip 192.168.85.2 --volume embed-certs-616586:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:49:52.267221  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Running}}
	I1126 20:49:52.296441  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:49:52.333432  214963 cli_runner.go:164] Run: docker exec embed-certs-616586 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:49:52.411604  214963 oci.go:144] the created container "embed-certs-616586" has a running status.
	I1126 20:49:52.411629  214963 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa...
	I1126 20:49:52.999031  214963 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:49:53.020668  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:49:53.043491  214963 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:49:53.043510  214963 kic_runner.go:114] Args: [docker exec --privileged embed-certs-616586 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:49:53.127201  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:49:53.155415  214963 machine.go:94] provisionDockerMachine start ...
	I1126 20:49:53.155586  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:53.183062  214963 main.go:143] libmachine: Using SSH client type: native
	I1126 20:49:53.183410  214963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1126 20:49:53.183424  214963 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:49:53.184214  214963 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50420->127.0.0.1:33063: read: connection reset by peer
	W1126 20:49:56.813253  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:49:58.818695  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:49:56.333671  214963 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-616586
	
	I1126 20:49:56.333696  214963 ubuntu.go:182] provisioning hostname "embed-certs-616586"
	I1126 20:49:56.333763  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:56.367344  214963 main.go:143] libmachine: Using SSH client type: native
	I1126 20:49:56.367661  214963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1126 20:49:56.367677  214963 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-616586 && echo "embed-certs-616586" | sudo tee /etc/hostname
	I1126 20:49:56.547573  214963 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-616586
	
	I1126 20:49:56.547666  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:56.568715  214963 main.go:143] libmachine: Using SSH client type: native
	I1126 20:49:56.569043  214963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1126 20:49:56.569064  214963 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-616586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-616586/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-616586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:49:56.737946  214963 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:49:56.737974  214963 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:49:56.738003  214963 ubuntu.go:190] setting up certificates
	I1126 20:49:56.738013  214963 provision.go:84] configureAuth start
	I1126 20:49:56.738069  214963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:49:56.755224  214963 provision.go:143] copyHostCerts
	I1126 20:49:56.755285  214963 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:49:56.755294  214963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:49:56.755368  214963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:49:56.755484  214963 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:49:56.755494  214963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:49:56.755520  214963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:49:56.755568  214963 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:49:56.755573  214963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:49:56.755597  214963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:49:56.755640  214963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.embed-certs-616586 san=[127.0.0.1 192.168.85.2 embed-certs-616586 localhost minikube]
	I1126 20:49:57.069867  214963 provision.go:177] copyRemoteCerts
	I1126 20:49:57.069981  214963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:49:57.070028  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.087034  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:49:57.195097  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1126 20:49:57.224471  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:49:57.245971  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1126 20:49:57.268387  214963 provision.go:87] duration metric: took 530.350086ms to configureAuth
	I1126 20:49:57.268465  214963 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:49:57.268702  214963 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:49:57.268900  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.284926  214963 main.go:143] libmachine: Using SSH client type: native
	I1126 20:49:57.285231  214963 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1126 20:49:57.285245  214963 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:49:57.605655  214963 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:49:57.605678  214963 machine.go:97] duration metric: took 4.450244674s to provisionDockerMachine
	I1126 20:49:57.605688  214963 client.go:176] duration metric: took 11.524769101s to LocalClient.Create
	I1126 20:49:57.605699  214963 start.go:167] duration metric: took 11.524828447s to libmachine.API.Create "embed-certs-616586"
	I1126 20:49:57.605706  214963 start.go:293] postStartSetup for "embed-certs-616586" (driver="docker")
	I1126 20:49:57.605716  214963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:49:57.605787  214963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:49:57.605836  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.623323  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:49:57.730112  214963 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:49:57.733545  214963 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:49:57.733575  214963 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:49:57.733587  214963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:49:57.733641  214963 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:49:57.733733  214963 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:49:57.733839  214963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:49:57.741946  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:49:57.769459  214963 start.go:296] duration metric: took 163.73825ms for postStartSetup
	I1126 20:49:57.769878  214963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:49:57.799111  214963 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/config.json ...
	I1126 20:49:57.799412  214963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:49:57.799458  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.831988  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:49:57.934928  214963 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:49:57.940415  214963 start.go:128] duration metric: took 11.863009831s to createHost
	I1126 20:49:57.940441  214963 start.go:83] releasing machines lock for "embed-certs-616586", held for 11.863134381s
	I1126 20:49:57.940527  214963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:49:57.958692  214963 ssh_runner.go:195] Run: cat /version.json
	I1126 20:49:57.958754  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.958969  214963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:49:57.959028  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:49:57.981348  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:49:57.998374  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:49:58.094130  214963 ssh_runner.go:195] Run: systemctl --version
	I1126 20:49:58.205160  214963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:49:58.259501  214963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:49:58.264372  214963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:49:58.264475  214963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:49:58.294476  214963 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1126 20:49:58.294510  214963 start.go:496] detecting cgroup driver to use...
	I1126 20:49:58.294579  214963 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:49:58.294648  214963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:49:58.322624  214963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:49:58.340653  214963 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:49:58.340750  214963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:49:58.358884  214963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:49:58.380260  214963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:49:58.550548  214963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:49:58.722709  214963 docker.go:234] disabling docker service ...
	I1126 20:49:58.722820  214963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:49:58.753799  214963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:49:58.769228  214963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:49:58.955834  214963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:49:59.128685  214963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:49:59.144825  214963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:49:59.171794  214963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:49:59.171917  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.182678  214963 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:49:59.182796  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.197348  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.215834  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.225964  214963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:49:59.237851  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.247650  214963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.263169  214963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:49:59.273604  214963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:49:59.282879  214963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:49:59.291619  214963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:49:59.457805  214963 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:49:59.745627  214963 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:49:59.745748  214963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:49:59.749961  214963 start.go:564] Will wait 60s for crictl version
	I1126 20:49:59.750069  214963 ssh_runner.go:195] Run: which crictl
	I1126 20:49:59.754370  214963 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:49:59.791343  214963 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:49:59.791485  214963 ssh_runner.go:195] Run: crio --version
	I1126 20:49:59.835460  214963 ssh_runner.go:195] Run: crio --version
	I1126 20:49:59.873973  214963 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:49:59.877297  214963 cli_runner.go:164] Run: docker network inspect embed-certs-616586 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:49:59.902624  214963 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:49:59.906115  214963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:49:59.916943  214963 kubeadm.go:884] updating cluster {Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:49:59.917070  214963 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:49:59.917128  214963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:49:59.976239  214963 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:49:59.976259  214963 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:49:59.976313  214963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:50:00.019132  214963 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:50:00.019155  214963 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:50:00.019164  214963 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1126 20:50:00.019285  214963 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-616586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:50:00.019386  214963 ssh_runner.go:195] Run: crio config
	I1126 20:50:00.102494  214963 cni.go:84] Creating CNI manager for ""
	I1126 20:50:00.102562  214963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:50:00.102615  214963 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:50:00.102657  214963 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-616586 NodeName:embed-certs-616586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:50:00.102835  214963 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-616586"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:50:00.102937  214963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:50:00.116384  214963 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:50:00.116520  214963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:50:00.128628  214963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1126 20:50:00.153439  214963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:50:00.175351  214963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1126 20:50:00.203711  214963 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:50:00.209088  214963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:50:00.224206  214963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:50:00.505495  214963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:50:00.530979  214963 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586 for IP: 192.168.85.2
	I1126 20:50:00.531057  214963 certs.go:195] generating shared ca certs ...
	I1126 20:50:00.531089  214963 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:00.531298  214963 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:50:00.531383  214963 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:50:00.531418  214963 certs.go:257] generating profile certs ...
	I1126 20:50:00.531496  214963 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.key
	I1126 20:50:00.531533  214963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.crt with IP's: []
	I1126 20:50:00.669552  214963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.crt ...
	I1126 20:50:00.669636  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.crt: {Name:mk8f6fd090b2026e4512f84966bafebc39935caf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:00.669823  214963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.key ...
	I1126 20:50:00.669860  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.key: {Name:mk0e96a9c7c793aab9d7251469212c3f09bb2a3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:00.670104  214963 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key.319cfcc4
	I1126 20:50:00.670155  214963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt.319cfcc4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1126 20:50:00.746683  214963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt.319cfcc4 ...
	I1126 20:50:00.746821  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt.319cfcc4: {Name:mk83622506fcd15de608147d8bba410f3c71f30f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:00.746986  214963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key.319cfcc4 ...
	I1126 20:50:00.747025  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key.319cfcc4: {Name:mka925bc5d8b94d5f0457184948a6b2348c292c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:00.747135  214963 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt.319cfcc4 -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt
	I1126 20:50:00.747256  214963 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key.319cfcc4 -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key
	I1126 20:50:00.747355  214963 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key
	I1126 20:50:00.747402  214963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.crt with IP's: []
	I1126 20:50:01.128643  214963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.crt ...
	I1126 20:50:01.128720  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.crt: {Name:mk3b8db5761eb1c0869bc560526d475f1eb7e799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:01.128950  214963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key ...
	I1126 20:50:01.129006  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key: {Name:mk453e33a7bb657477f9975c93ee96c9cf598cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:01.129311  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:50:01.129383  214963 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:50:01.129408  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:50:01.129477  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:50:01.129536  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:50:01.129588  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:50:01.129672  214963 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:50:01.130384  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:50:01.159113  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:50:01.195105  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:50:01.221836  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:50:01.253275  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1126 20:50:01.280101  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:50:01.315202  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:50:01.339816  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:50:01.365706  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:50:01.394439  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:50:01.423245  214963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:50:01.451953  214963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:50:01.502720  214963 ssh_runner.go:195] Run: openssl version
	I1126 20:50:01.523604  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:50:01.543478  214963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:50:01.550041  214963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:50:01.550167  214963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:50:01.599485  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:50:01.608618  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:50:01.619419  214963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:50:01.624227  214963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:50:01.624307  214963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:50:01.673569  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:50:01.682980  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:50:01.693107  214963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:50:01.699813  214963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:50:01.699892  214963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:50:01.744877  214963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:50:01.755498  214963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:50:01.760824  214963 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:50:01.760893  214963 kubeadm.go:401] StartCluster: {Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:50:01.760973  214963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:50:01.761036  214963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:50:01.802907  214963 cri.go:89] found id: ""
	I1126 20:50:01.802981  214963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:50:01.818085  214963 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:50:01.826903  214963 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:50:01.826979  214963 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:50:01.838042  214963 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:50:01.838063  214963 kubeadm.go:158] found existing configuration files:
	
	I1126 20:50:01.838116  214963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:50:01.847878  214963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:50:01.847959  214963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:50:01.856117  214963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:50:01.867182  214963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:50:01.867267  214963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:50:01.877043  214963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:50:01.886608  214963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:50:01.886681  214963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:50:01.895901  214963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:50:01.904893  214963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:50:01.904966  214963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:50:01.913427  214963 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:50:01.966669  214963 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:50:01.967094  214963 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:50:02.012730  214963 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:50:02.012842  214963 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1126 20:50:02.012895  214963 kubeadm.go:319] OS: Linux
	I1126 20:50:02.012949  214963 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:50:02.013004  214963 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1126 20:50:02.013065  214963 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:50:02.013119  214963 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:50:02.013171  214963 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:50:02.013224  214963 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:50:02.013273  214963 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:50:02.013326  214963 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:50:02.013374  214963 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1126 20:50:02.097316  214963 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:50:02.097432  214963 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:50:02.097529  214963 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:50:02.106357  214963 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1126 20:50:01.316533  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:03.813339  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:02.114650  214963 out.go:252]   - Generating certificates and keys ...
	I1126 20:50:02.114750  214963 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:50:02.114823  214963 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:50:02.403423  214963 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:50:02.784300  214963 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:50:03.158332  214963 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:50:03.696982  214963 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:50:04.129173  214963 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:50:04.130695  214963 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-616586 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1126 20:50:04.464776  214963 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:50:04.465395  214963 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-616586 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1126 20:50:05.669642  214963 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:50:05.982488  214963 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:50:06.334111  214963 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:50:06.334627  214963 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:50:07.632244  214963 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:50:08.243013  214963 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:50:08.653407  214963 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:50:09.223749  214963 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:50:09.506260  214963 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:50:09.506417  214963 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:50:09.517340  214963 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1126 20:50:05.813533  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:08.313524  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:09.526130  214963 out.go:252]   - Booting up control plane ...
	I1126 20:50:09.526300  214963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:50:09.531115  214963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:50:09.531201  214963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:50:09.568360  214963 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:50:09.568682  214963 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 20:50:09.577330  214963 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 20:50:09.577641  214963 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:50:09.577847  214963 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:50:09.758768  214963 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 20:50:09.758972  214963 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1126 20:50:10.314502  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:12.819736  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:10.759946  214963 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001282424s
	I1126 20:50:10.770267  214963 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 20:50:10.770473  214963 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1126 20:50:10.770602  214963 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 20:50:10.770694  214963 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 20:50:14.045220  214963 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.281388239s
	I1126 20:50:15.734204  214963 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.970744415s
	I1126 20:50:17.266066  214963 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502497326s
	I1126 20:50:17.290060  214963 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:50:17.314575  214963 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:50:17.326808  214963 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:50:17.327022  214963 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-616586 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:50:17.341455  214963 kubeadm.go:319] [bootstrap-token] Using token: fhaqlq.94cikrh91bquxnf5
	I1126 20:50:17.344369  214963 out.go:252]   - Configuring RBAC rules ...
	I1126 20:50:17.344503  214963 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:50:17.353764  214963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:50:17.362306  214963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:50:17.366773  214963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:50:17.374177  214963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:50:17.378371  214963 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:50:17.675450  214963 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:50:18.131603  214963 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:50:18.675758  214963 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:50:18.677277  214963 kubeadm.go:319] 
	I1126 20:50:18.677375  214963 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:50:18.677388  214963 kubeadm.go:319] 
	I1126 20:50:18.677466  214963 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:50:18.677492  214963 kubeadm.go:319] 
	I1126 20:50:18.677542  214963 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:50:18.677607  214963 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:50:18.677665  214963 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:50:18.677681  214963 kubeadm.go:319] 
	I1126 20:50:18.677741  214963 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:50:18.677746  214963 kubeadm.go:319] 
	I1126 20:50:18.677795  214963 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:50:18.677799  214963 kubeadm.go:319] 
	I1126 20:50:18.677851  214963 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:50:18.677984  214963 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:50:18.678070  214963 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:50:18.678084  214963 kubeadm.go:319] 
	I1126 20:50:18.678171  214963 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:50:18.678274  214963 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:50:18.678282  214963 kubeadm.go:319] 
	I1126 20:50:18.678375  214963 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fhaqlq.94cikrh91bquxnf5 \
	I1126 20:50:18.678492  214963 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a \
	I1126 20:50:18.678538  214963 kubeadm.go:319] 	--control-plane 
	I1126 20:50:18.678543  214963 kubeadm.go:319] 
	I1126 20:50:18.678645  214963 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:50:18.678694  214963 kubeadm.go:319] 
	I1126 20:50:18.678804  214963 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fhaqlq.94cikrh91bquxnf5 \
	I1126 20:50:18.678953  214963 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a 
	I1126 20:50:18.683217  214963 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1126 20:50:18.683446  214963 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1126 20:50:18.683556  214963 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 20:50:18.683579  214963 cni.go:84] Creating CNI manager for ""
	I1126 20:50:18.683590  214963 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:50:18.686815  214963 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1126 20:50:15.312459  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:17.312812  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:19.312925  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:18.689651  214963 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:50:18.696173  214963 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:50:18.696236  214963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:50:18.718569  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:50:19.483434  214963 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:50:19.483566  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:19.483647  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-616586 minikube.k8s.io/updated_at=2025_11_26T20_50_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=embed-certs-616586 minikube.k8s.io/primary=true
	I1126 20:50:19.639322  214963 ops.go:34] apiserver oom_adj: -16
	I1126 20:50:19.645164  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:20.145546  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:20.646111  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:21.145650  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:21.645817  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:22.145273  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:22.646137  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:23.145556  214963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:50:23.244879  214963 kubeadm.go:1114] duration metric: took 3.761355778s to wait for elevateKubeSystemPrivileges
	I1126 20:50:23.244915  214963 kubeadm.go:403] duration metric: took 21.484020828s to StartCluster
	I1126 20:50:23.244934  214963 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:23.245002  214963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:50:23.246314  214963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:23.246566  214963 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:50:23.246676  214963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:50:23.246911  214963 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:50:23.246959  214963 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:50:23.247023  214963 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-616586"
	I1126 20:50:23.247038  214963 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-616586"
	I1126 20:50:23.247067  214963 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:50:23.247584  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:50:23.248216  214963 addons.go:70] Setting default-storageclass=true in profile "embed-certs-616586"
	I1126 20:50:23.248239  214963 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-616586"
	I1126 20:50:23.248513  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:50:23.250913  214963 out.go:179] * Verifying Kubernetes components...
	I1126 20:50:23.254937  214963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:50:23.282096  214963 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:50:23.284993  214963 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:50:23.285016  214963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:50:23.285081  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:50:23.293556  214963 addons.go:239] Setting addon default-storageclass=true in "embed-certs-616586"
	I1126 20:50:23.293594  214963 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:50:23.294309  214963 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:50:23.325768  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:50:23.339602  214963 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:50:23.339624  214963 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:50:23.339687  214963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:50:23.369979  214963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:50:23.639340  214963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:50:23.647547  214963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:50:23.650762  214963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:50:23.650878  214963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:50:24.602499  214963 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1126 20:50:24.603311  214963 node_ready.go:35] waiting up to 6m0s for node "embed-certs-616586" to be "Ready" ...
	I1126 20:50:24.656251  214963 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1126 20:50:21.812110  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:23.812216  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:24.659038  214963 addons.go:530] duration metric: took 1.412074022s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:50:25.107708  214963 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-616586" context rescaled to 1 replicas
	W1126 20:50:25.812754  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:28.312728  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	W1126 20:50:26.606318  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:28.607259  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:30.313004  212927 pod_ready.go:104] pod "coredns-66bc5c9577-4z56c" is not "Ready", error: <nil>
	I1126 20:50:32.313619  212927 pod_ready.go:94] pod "coredns-66bc5c9577-4z56c" is "Ready"
	I1126 20:50:32.313648  212927 pod_ready.go:86] duration metric: took 37.506488802s for pod "coredns-66bc5c9577-4z56c" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.316988  212927 pod_ready.go:83] waiting for pod "etcd-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.321819  212927 pod_ready.go:94] pod "etcd-no-preload-956694" is "Ready"
	I1126 20:50:32.321847  212927 pod_ready.go:86] duration metric: took 4.830935ms for pod "etcd-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.325248  212927 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.330179  212927 pod_ready.go:94] pod "kube-apiserver-no-preload-956694" is "Ready"
	I1126 20:50:32.330207  212927 pod_ready.go:86] duration metric: took 4.930017ms for pod "kube-apiserver-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.332576  212927 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.511770  212927 pod_ready.go:94] pod "kube-controller-manager-no-preload-956694" is "Ready"
	I1126 20:50:32.511807  212927 pod_ready.go:86] duration metric: took 179.202247ms for pod "kube-controller-manager-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:32.711624  212927 pod_ready.go:83] waiting for pod "kube-proxy-2j4dg" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:33.111211  212927 pod_ready.go:94] pod "kube-proxy-2j4dg" is "Ready"
	I1126 20:50:33.111240  212927 pod_ready.go:86] duration metric: took 399.589365ms for pod "kube-proxy-2j4dg" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:33.311566  212927 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:33.710664  212927 pod_ready.go:94] pod "kube-scheduler-no-preload-956694" is "Ready"
	I1126 20:50:33.710757  212927 pod_ready.go:86] duration metric: took 399.162584ms for pod "kube-scheduler-no-preload-956694" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:50:33.710788  212927 pod_ready.go:40] duration metric: took 38.907115315s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:50:33.768957  212927 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:50:33.773897  212927 out.go:179] * Done! kubectl is now configured to use "no-preload-956694" cluster and "default" namespace by default
	W1126 20:50:31.106298  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:33.606117  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:35.606207  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:37.606414  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:40.106509  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:42.107361  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:44.606475  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.271176855Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.274670829Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.274701663Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.274721913Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.27782897Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.277859648Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.277882104Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.281974223Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.282009348Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.28203402Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.285040411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:50:34 no-preload-956694 crio[657]: time="2025-11-26T20:50:34.285070794Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.743893914Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=213f6b28-4145-40a1-9743-90802784c8d8 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.745469481Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=65f7d17e-9d59-43ab-a045-20c23be603ce name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.746594362Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q/dashboard-metrics-scraper" id=477f6cc4-2f6c-4573-857f-01b93a85f53c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.746695422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.754475724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.755206582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.776682792Z" level=info msg="Created container 29aceaa82429db92b12b0fa7cd1c23589c67124c5ba0a8f019d64c3035e55cf4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q/dashboard-metrics-scraper" id=477f6cc4-2f6c-4573-857f-01b93a85f53c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.779291952Z" level=info msg="Starting container: 29aceaa82429db92b12b0fa7cd1c23589c67124c5ba0a8f019d64c3035e55cf4" id=d0a43b1f-c18c-4d03-b66c-7e858eed365b name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:50:41 no-preload-956694 crio[657]: time="2025-11-26T20:50:41.78136672Z" level=info msg="Started container" PID=1722 containerID=29aceaa82429db92b12b0fa7cd1c23589c67124c5ba0a8f019d64c3035e55cf4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q/dashboard-metrics-scraper id=d0a43b1f-c18c-4d03-b66c-7e858eed365b name=/runtime.v1.RuntimeService/StartContainer sandboxID=83512babed2bf644e30f537effb6871e91e1e8f5e6cb3c1dc4c996f755f23066
	Nov 26 20:50:41 no-preload-956694 conmon[1719]: conmon 29aceaa82429db92b12b <ninfo>: container 1722 exited with status 1
	Nov 26 20:50:42 no-preload-956694 crio[657]: time="2025-11-26T20:50:42.121646869Z" level=info msg="Removing container: e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83" id=29f0de8c-f378-44c4-845e-273d5c21bc02 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:50:42 no-preload-956694 crio[657]: time="2025-11-26T20:50:42.187846711Z" level=info msg="Error loading conmon cgroup of container e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83: cgroup deleted" id=29f0de8c-f378-44c4-845e-273d5c21bc02 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:50:42 no-preload-956694 crio[657]: time="2025-11-26T20:50:42.207667865Z" level=info msg="Removed container e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q/dashboard-metrics-scraper" id=29f0de8c-f378-44c4-845e-273d5c21bc02 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	29aceaa82429d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   83512babed2bf       dashboard-metrics-scraper-6ffb444bf9-jk74q   kubernetes-dashboard
	6a0c09bf8b235       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           26 seconds ago       Running             storage-provisioner         2                   1b1a416309b73       storage-provisioner                          kube-system
	ac76226123cfd       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   23c69956ad523       kubernetes-dashboard-855c9754f9-f79rr        kubernetes-dashboard
	0554e6955b891       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   67b102b9c3876       coredns-66bc5c9577-4z56c                     kube-system
	524f8264faaa1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   79ce7f2599553       busybox                                      default
	9367fa09811bc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   69ad0c5761aee       kindnet-dfdbx                                kube-system
	39dbe8551a738       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   bfa318e4f0fac       kube-proxy-2j4dg                             kube-system
	fe095a7725bd2       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           57 seconds ago       Exited              storage-provisioner         1                   1b1a416309b73       storage-provisioner                          kube-system
	64bf641df6328       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   88eb349d9ffe2       kube-scheduler-no-preload-956694             kube-system
	69bdaac7802d2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   8e16cc2ba0274       etcd-no-preload-956694                       kube-system
	166f0bf71ff63       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   5e1c7e20921d3       kube-apiserver-no-preload-956694             kube-system
	732f8dd674b2a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   b7ecdd70219df       kube-controller-manager-no-preload-956694    kube-system
	
	
	==> coredns [0554e6955b891b84949248d4dd7484a05d62ffe5fb5cc50417b0300d8db3c64e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55523 - 31262 "HINFO IN 8004151322408716222.6502235733554110869. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023671659s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-956694
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-956694
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=no-preload-956694
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_48_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:48:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-956694
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:50:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:50:23 +0000   Wed, 26 Nov 2025 20:48:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:50:23 +0000   Wed, 26 Nov 2025 20:48:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:50:23 +0000   Wed, 26 Nov 2025 20:48:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:50:23 +0000   Wed, 26 Nov 2025 20:49:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-956694
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                ca0edc11-ec05-4f09-ac60-84d8767e18da
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-4z56c                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-no-preload-956694                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-dfdbx                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-no-preload-956694              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-956694     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-2j4dg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-no-preload-956694              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-jk74q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f79rr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 114s                 kube-proxy       
	  Normal   Starting                 56s                  kube-proxy       
	  Warning  CgroupV1                 2m7s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node no-preload-956694 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node no-preload-956694 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node no-preload-956694 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m                   kubelet          Node no-preload-956694 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m                   kubelet          Node no-preload-956694 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m                   kubelet          Node no-preload-956694 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           116s                 node-controller  Node no-preload-956694 event: Registered Node no-preload-956694 in Controller
	  Normal   NodeReady                102s                 kubelet          Node no-preload-956694 status is now: NodeReady
	  Normal   Starting                 67s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)    kubelet          Node no-preload-956694 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)    kubelet          Node no-preload-956694 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)    kubelet          Node no-preload-956694 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                  node-controller  Node no-preload-956694 event: Registered Node no-preload-956694 in Controller
	
	
	==> dmesg <==
	[Nov26 20:23] overlayfs: idmapped layers are currently not supported
	[Nov26 20:24] overlayfs: idmapped layers are currently not supported
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	[Nov26 20:49] overlayfs: idmapped layers are currently not supported
	[Nov26 20:50] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [69bdaac7802d27e42ed29500b6c8549fd05c61287e8c9653748bb2accdeae2e1] <==
	{"level":"warn","ts":"2025-11-26T20:49:50.224418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.258130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.284997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.322099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.372177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.399086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.428996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.464864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.475077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.522104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.528749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.551183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.572719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.597641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.610601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.637174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.655256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.672603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.689490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.714396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.736139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.775134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.830350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.855061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:49:50.969565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43488","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:50:50 up  1:33,  0 user,  load average: 3.47, 3.16, 2.51
	Linux no-preload-956694 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9367fa09811bc7824710f65db213810a28f4a5b2e9e228aec215eff41118f2d9] <==
	I1126 20:49:54.129127       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:49:54.129339       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:49:54.129447       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:49:54.129458       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:49:54.129468       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:49:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:49:54.328498       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:49:54.328525       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:49:54.328534       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:49:54.328842       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:50:24.261296       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:50:24.328968       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:50:24.329161       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:50:24.329283       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1126 20:50:25.828698       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:50:25.828734       1 metrics.go:72] Registering metrics
	I1126 20:50:25.828784       1 controller.go:711] "Syncing nftables rules"
	I1126 20:50:34.262255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:50:34.262312       1 main.go:301] handling current node
	I1126 20:50:44.261909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:50:44.261970       1 main.go:301] handling current node
	
	
	==> kube-apiserver [166f0bf71ff637391d7021779d2e2a5d27dea53b2e94af5da7c6556cf939eefc] <==
	I1126 20:49:52.350745       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1126 20:49:52.354972       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:49:52.355046       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:49:52.355319       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:49:52.378217       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1126 20:49:52.378266       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:49:52.379092       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:49:52.401157       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:49:52.429024       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:49:52.438840       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:49:52.438866       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:49:52.438873       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:49:52.438893       1 cache.go:39] Caches are synced for autoregister controller
	E1126 20:49:52.615777       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:49:52.971753       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:49:53.137502       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:49:54.472312       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:49:54.514871       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:49:54.559304       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:49:54.575817       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:49:54.665907       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.90.118"}
	I1126 20:49:54.707837       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.16.56"}
	I1126 20:49:56.864257       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:49:56.966871       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:49:57.017165       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [732f8dd674b2a79542d6b5db5ae656af930d6da79a225d1e0dbcfdec933c1b97] <==
	I1126 20:49:56.432881       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:49:56.434167       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:49:56.436302       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:49:56.437236       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:49:56.437317       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:49:56.445229       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:49:56.445254       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:49:56.447530       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:49:56.445269       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 20:49:56.451483       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1126 20:49:56.453510       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 20:49:56.456323       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:49:56.456501       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 20:49:56.457316       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:49:56.457332       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:49:56.459681       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:49:56.458913       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 20:49:56.464948       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1126 20:49:56.466158       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:49:56.466805       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 20:49:56.474757       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:49:56.506665       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:49:56.506757       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:49:56.506789       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:49:56.531747       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [39dbe8551a73859abffe10915c3f3e6c1fd1869e9b974e6953b486b1a5d2578d] <==
	I1126 20:49:54.204125       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:49:54.343054       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:49:54.452697       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:49:54.452735       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1126 20:49:54.452839       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:49:54.528615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:49:54.528667       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:49:54.562167       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:49:54.562755       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:49:54.563008       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:49:54.565239       1 config.go:200] "Starting service config controller"
	I1126 20:49:54.570344       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:49:54.573696       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:49:54.576810       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:49:54.576973       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:49:54.577017       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:49:54.586867       1 config.go:309] "Starting node config controller"
	I1126 20:49:54.591100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:49:54.591406       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:49:54.675008       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:49:54.677275       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:49:54.677314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [64bf641df6328d766e26a8b3d40eb3a629a1b6d5034073ad5e5eacc3049b071b] <==
	I1126 20:49:50.318713       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:49:52.205122       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:49:52.205152       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:49:52.205162       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:49:52.205180       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:49:52.591225       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:49:52.591253       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:49:52.608888       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:49:52.609568       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:49:52.612058       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:49:52.612351       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:49:52.710097       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:49:57 no-preload-956694 kubelet[778]: W1126 20:49:57.535913     778 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53e8b694caf6dd34a274927bf19136786ad1454bc00d9527b50cd4d3b517c78b/crio-23c69956ad523eff3ccfa7b73af3797ce1495572840d5f3cec267618b8a0f42e WatchSource:0}: Error finding container 23c69956ad523eff3ccfa7b73af3797ce1495572840d5f3cec267618b8a0f42e: Status 404 returned error can't find the container with id 23c69956ad523eff3ccfa7b73af3797ce1495572840d5f3cec267618b8a0f42e
	Nov 26 20:50:02 no-preload-956694 kubelet[778]: I1126 20:50:02.159054     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:50:03 no-preload-956694 kubelet[778]: I1126 20:50:03.978431     778 scope.go:117] "RemoveContainer" containerID="a7666dc44e48e4adc5ca904c88567f71c5adf39895c805f965f37a15c6fc8c9d"
	Nov 26 20:50:04 no-preload-956694 kubelet[778]: I1126 20:50:04.984789     778 scope.go:117] "RemoveContainer" containerID="a7666dc44e48e4adc5ca904c88567f71c5adf39895c805f965f37a15c6fc8c9d"
	Nov 26 20:50:04 no-preload-956694 kubelet[778]: I1126 20:50:04.985103     778 scope.go:117] "RemoveContainer" containerID="3e5d7bc87da7fc300c53d05db4e764ac484d78646476776b98cc3b1fb9e9361b"
	Nov 26 20:50:04 no-preload-956694 kubelet[778]: E1126 20:50:04.985268     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:05 no-preload-956694 kubelet[778]: I1126 20:50:05.989799     778 scope.go:117] "RemoveContainer" containerID="3e5d7bc87da7fc300c53d05db4e764ac484d78646476776b98cc3b1fb9e9361b"
	Nov 26 20:50:05 no-preload-956694 kubelet[778]: E1126 20:50:05.994865     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:07 no-preload-956694 kubelet[778]: I1126 20:50:07.483311     778 scope.go:117] "RemoveContainer" containerID="3e5d7bc87da7fc300c53d05db4e764ac484d78646476776b98cc3b1fb9e9361b"
	Nov 26 20:50:07 no-preload-956694 kubelet[778]: E1126 20:50:07.483482     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:20 no-preload-956694 kubelet[778]: I1126 20:50:20.743306     778 scope.go:117] "RemoveContainer" containerID="3e5d7bc87da7fc300c53d05db4e764ac484d78646476776b98cc3b1fb9e9361b"
	Nov 26 20:50:21 no-preload-956694 kubelet[778]: I1126 20:50:21.035569     778 scope.go:117] "RemoveContainer" containerID="3e5d7bc87da7fc300c53d05db4e764ac484d78646476776b98cc3b1fb9e9361b"
	Nov 26 20:50:21 no-preload-956694 kubelet[778]: I1126 20:50:21.035867     778 scope.go:117] "RemoveContainer" containerID="e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83"
	Nov 26 20:50:21 no-preload-956694 kubelet[778]: E1126 20:50:21.036064     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:21 no-preload-956694 kubelet[778]: I1126 20:50:21.060553     778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f79rr" podStartSLOduration=11.366549392 podStartE2EDuration="24.060465986s" podCreationTimestamp="2025-11-26 20:49:57 +0000 UTC" firstStartedPulling="2025-11-26 20:49:57.543592816 +0000 UTC m=+14.054369669" lastFinishedPulling="2025-11-26 20:50:10.237509418 +0000 UTC m=+26.748286263" observedRunningTime="2025-11-26 20:50:11.02261951 +0000 UTC m=+27.533396372" watchObservedRunningTime="2025-11-26 20:50:21.060465986 +0000 UTC m=+37.571242839"
	Nov 26 20:50:24 no-preload-956694 kubelet[778]: I1126 20:50:24.047507     778 scope.go:117] "RemoveContainer" containerID="fe095a7725bd274ab36ace78665c689e31b9870d45c3f58f42466f2b19ca1bac"
	Nov 26 20:50:27 no-preload-956694 kubelet[778]: I1126 20:50:27.483757     778 scope.go:117] "RemoveContainer" containerID="e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83"
	Nov 26 20:50:27 no-preload-956694 kubelet[778]: E1126 20:50:27.484424     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:41 no-preload-956694 kubelet[778]: I1126 20:50:41.743150     778 scope.go:117] "RemoveContainer" containerID="e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83"
	Nov 26 20:50:42 no-preload-956694 kubelet[778]: I1126 20:50:42.112113     778 scope.go:117] "RemoveContainer" containerID="e08b706dc5c3f98f3da0528c6ab01440948c6e5c733fab3f7d96f60284b98d83"
	Nov 26 20:50:42 no-preload-956694 kubelet[778]: I1126 20:50:42.116851     778 scope.go:117] "RemoveContainer" containerID="29aceaa82429db92b12b0fa7cd1c23589c67124c5ba0a8f019d64c3035e55cf4"
	Nov 26 20:50:42 no-preload-956694 kubelet[778]: E1126 20:50:42.117235     778 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-jk74q_kubernetes-dashboard(f5569895-2ab9-4e89-af25-c9702a514f87)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-jk74q" podUID="f5569895-2ab9-4e89-af25-c9702a514f87"
	Nov 26 20:50:46 no-preload-956694 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:50:46 no-preload-956694 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:50:46 no-preload-956694 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ac76226123cfd439ff139dd22351505e7daee346df3e1e19dcb7f7d973283462] <==
	2025/11/26 20:50:10 Starting overwatch
	2025/11/26 20:50:10 Using namespace: kubernetes-dashboard
	2025/11/26 20:50:10 Using in-cluster config to connect to apiserver
	2025/11/26 20:50:10 Using secret token for csrf signing
	2025/11/26 20:50:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:50:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:50:10 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:50:10 Generating JWE encryption key
	2025/11/26 20:50:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:50:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:50:10 Initializing JWE encryption key from synchronized object
	2025/11/26 20:50:10 Creating in-cluster Sidecar client
	2025/11/26 20:50:10 Serving insecurely on HTTP port: 9090
	2025/11/26 20:50:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:50:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6a0c09bf8b235fc0d759a84c8b8fdceafe61508be112ec1cf5b51a0d6b389fa7] <==
	I1126 20:50:24.154055       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:50:24.193371       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:50:24.193482       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:50:24.198207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:27.653354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:31.913987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:35.512644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:38.566211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:41.588204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:41.593337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:50:41.593581       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:50:41.593781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-956694_39ef9c9d-7e87-4b0e-850f-3286c711d3bb!
	I1126 20:50:41.594306       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa3c6d99-6069-4dc6-b561-d2344160065e", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-956694_39ef9c9d-7e87-4b0e-850f-3286c711d3bb became leader
	W1126 20:50:41.598316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:41.607748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:50:41.694621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-956694_39ef9c9d-7e87-4b0e-850f-3286c711d3bb!
	W1126 20:50:43.611325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:43.618340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:45.623938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:45.630860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:47.635067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:47.642982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:49.645431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:50:49.651038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fe095a7725bd274ab36ace78665c689e31b9870d45c3f58f42466f2b19ca1bac] <==
	I1126 20:49:53.734009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:50:23.739386       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-956694 -n no-preload-956694
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-956694 -n no-preload-956694: exit status 2 (377.225653ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-956694 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.23s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.54s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.974149ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:51:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-616586 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-616586 describe deploy/metrics-server -n kube-system: exit status 1 (97.542412ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-616586 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-616586
helpers_test.go:243: (dbg) docker inspect embed-certs-616586:

-- stdout --
	[
	    {
	        "Id": "76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d",
	        "Created": "2025-11-26T20:49:51.803939719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 215365,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:49:51.872990122Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/hosts",
	        "LogPath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d-json.log",
	        "Name": "/embed-certs-616586",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-616586:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-616586",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d",
	                "LowerDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-616586",
	                "Source": "/var/lib/docker/volumes/embed-certs-616586/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-616586",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-616586",
	                "name.minikube.sigs.k8s.io": "embed-certs-616586",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39fe401c64c8fe6f96e47646b527274e960582f60911ef736227869901bdda78",
	            "SandboxKey": "/var/run/docker/netns/39fe401c64c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-616586": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:af:b7:f7:09:68",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e81bfab46f3df2dcaf4383ddbd73f7ed61981d9755f2d4e0122a1a2df6affbf8",
	                    "EndpointID": "778b6c1645fe36db15fb0b1c1496a660c186a20b323affd2ef0621ae1cd716b5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-616586",
	                        "76154eec8a12"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-616586 -n embed-certs-616586
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-616586 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-616586 logs -n 25: (1.804286326s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-207115                                                                                                                                                                                                                        │ cert-options-207115          │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:45 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:45 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-264537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │                     │
	│ stop    │ -p old-k8s-version-264537 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-264537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:47 UTC │
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164741       │ jenkins │ v1.37.0 │ 26 Nov 25 20:47 UTC │ 26 Nov 25 20:49 UTC │
	│ image   │ old-k8s-version-264537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ pause   │ -p old-k8s-version-264537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │                     │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                                                                                     │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                                                                                     │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable metrics-server -p no-preload-956694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │                     │
	│ stop    │ -p no-preload-956694 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable dashboard -p no-preload-956694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p cert-expiration-164741                                                                                                                                                                                                                     │ cert-expiration-164741       │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:51 UTC │
	│ image   │ no-preload-956694 image list --format=json                                                                                                                                                                                                    │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ pause   │ -p no-preload-956694 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │                     │
	│ delete  │ -p no-preload-956694                                                                                                                                                                                                                          │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p no-preload-956694                                                                                                                                                                                                                          │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p disable-driver-mounts-180932                                                                                                                                                                                                               │ disable-driver-mounts-180932 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:50:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:50:55.210239  219464 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:50:55.210430  219464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:50:55.210444  219464 out.go:374] Setting ErrFile to fd 2...
	I1126 20:50:55.210450  219464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:50:55.210746  219464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:50:55.211193  219464 out.go:368] Setting JSON to false
	I1126 20:50:55.212227  219464 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5586,"bootTime":1764184670,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:50:55.212290  219464 start.go:143] virtualization:  
	I1126 20:50:55.216415  219464 out.go:179] * [default-k8s-diff-port-538119] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:50:55.220829  219464 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:50:55.220845  219464 notify.go:221] Checking for updates...
	I1126 20:50:55.224102  219464 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:50:55.227300  219464 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:50:55.230407  219464 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:50:55.233508  219464 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:50:55.236576  219464 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:50:55.240354  219464 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:50:55.240562  219464 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:50:55.277903  219464 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:50:55.278052  219464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:50:55.339033  219464 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:50:55.329426517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:50:55.339136  219464 docker.go:319] overlay module found
	I1126 20:50:55.342289  219464 out.go:179] * Using the docker driver based on user configuration
	I1126 20:50:55.345203  219464 start.go:309] selected driver: docker
	I1126 20:50:55.345223  219464 start.go:927] validating driver "docker" against <nil>
	I1126 20:50:55.345236  219464 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:50:55.346128  219464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:50:55.406782  219464 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:50:55.397462244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:50:55.406952  219464 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 20:50:55.407179  219464 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:50:55.410106  219464 out.go:179] * Using Docker driver with root privileges
	I1126 20:50:55.412843  219464 cni.go:84] Creating CNI manager for ""
	I1126 20:50:55.412906  219464 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:50:55.412920  219464 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 20:50:55.413004  219464 start.go:353] cluster config:
	{Name:default-k8s-diff-port-538119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:50:55.418058  219464 out.go:179] * Starting "default-k8s-diff-port-538119" primary control-plane node in "default-k8s-diff-port-538119" cluster
	I1126 20:50:55.420874  219464 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:50:55.423871  219464 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:50:55.426812  219464 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:50:55.426858  219464 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:50:55.426871  219464 cache.go:65] Caching tarball of preloaded images
	I1126 20:50:55.426886  219464 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:50:55.426967  219464 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:50:55.426976  219464 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:50:55.427077  219464 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/config.json ...
	I1126 20:50:55.427094  219464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/config.json: {Name:mkd672c2fa57544022b01c97546b3d6e81538d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:50:55.446270  219464 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:50:55.446297  219464 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:50:55.446315  219464 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:50:55.446349  219464 start.go:360] acquireMachinesLock for default-k8s-diff-port-538119: {Name:mkdef3fabf2e513d8e713b1948a2979a9bdfa526 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:50:55.446454  219464 start.go:364] duration metric: took 85.38µs to acquireMachinesLock for "default-k8s-diff-port-538119"
	I1126 20:50:55.446483  219464 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-538119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:50:55.446550  219464 start.go:125] createHost starting for "" (driver="docker")
	W1126 20:50:51.606331  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:54.106682  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	I1126 20:50:55.450000  219464 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:50:55.450248  219464 start.go:159] libmachine.API.Create for "default-k8s-diff-port-538119" (driver="docker")
	I1126 20:50:55.450291  219464 client.go:173] LocalClient.Create starting
	I1126 20:50:55.450360  219464 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem
	I1126 20:50:55.450419  219464 main.go:143] libmachine: Decoding PEM data...
	I1126 20:50:55.450442  219464 main.go:143] libmachine: Parsing certificate...
	I1126 20:50:55.450500  219464 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem
	I1126 20:50:55.450532  219464 main.go:143] libmachine: Decoding PEM data...
	I1126 20:50:55.450544  219464 main.go:143] libmachine: Parsing certificate...
	I1126 20:50:55.450956  219464 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:50:55.466764  219464 cli_runner.go:211] docker network inspect default-k8s-diff-port-538119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:50:55.466856  219464 network_create.go:284] running [docker network inspect default-k8s-diff-port-538119] to gather additional debugging logs...
	I1126 20:50:55.466882  219464 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538119
	W1126 20:50:55.482676  219464 cli_runner.go:211] docker network inspect default-k8s-diff-port-538119 returned with exit code 1
	I1126 20:50:55.482709  219464 network_create.go:287] error running [docker network inspect default-k8s-diff-port-538119]: docker network inspect default-k8s-diff-port-538119: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-538119 not found
	I1126 20:50:55.482724  219464 network_create.go:289] output of [docker network inspect default-k8s-diff-port-538119]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-538119 not found
	
	** /stderr **
	I1126 20:50:55.482816  219464 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:50:55.499165  219464 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20cb65a83ad5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:26:47:2b:2e:03} reservation:<nil>}
	I1126 20:50:55.499503  219464 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-16105a7ff776 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:75:f6:9d:ad:ac} reservation:<nil>}
	I1126 20:50:55.499833  219464 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f1c69ea9dfa3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b7:bf:8a:44:80} reservation:<nil>}
	I1126 20:50:55.500224  219464 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1d290}
	I1126 20:50:55.500246  219464 network_create.go:124] attempt to create docker network default-k8s-diff-port-538119 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1126 20:50:55.500303  219464 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-538119 default-k8s-diff-port-538119
	I1126 20:50:55.564510  219464 network_create.go:108] docker network default-k8s-diff-port-538119 192.168.76.0/24 created
	I1126 20:50:55.564543  219464 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-538119" container
	I1126 20:50:55.564613  219464 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:50:55.584466  219464 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-538119 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-538119 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:50:55.603973  219464 oci.go:103] Successfully created a docker volume default-k8s-diff-port-538119
	I1126 20:50:55.604076  219464 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-538119-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-538119 --entrypoint /usr/bin/test -v default-k8s-diff-port-538119:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:50:56.143652  219464 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-538119
	I1126 20:50:56.143712  219464 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:50:56.143722  219464 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:50:56.143790  219464 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-538119:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	W1126 20:50:56.107082  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:50:58.607069  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	I1126 20:51:00.567736  219464 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-538119:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (4.423905894s)
	I1126 20:51:00.567765  219464 kic.go:203] duration metric: took 4.424038895s to extract preloaded images to volume ...
	W1126 20:51:00.567901  219464 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1126 20:51:00.568015  219464 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:51:00.626607  219464 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-538119 --name default-k8s-diff-port-538119 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-538119 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-538119 --network default-k8s-diff-port-538119 --ip 192.168.76.2 --volume default-k8s-diff-port-538119:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:51:00.916966  219464 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538119 --format={{.State.Running}}
	I1126 20:51:00.936010  219464 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538119 --format={{.State.Status}}
	I1126 20:51:00.964415  219464 cli_runner.go:164] Run: docker exec default-k8s-diff-port-538119 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:51:01.020348  219464 oci.go:144] the created container "default-k8s-diff-port-538119" has a running status.
	I1126 20:51:01.020379  219464 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa...
	I1126 20:51:01.248682  219464 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:51:01.281118  219464 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538119 --format={{.State.Status}}
	I1126 20:51:01.310829  219464 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:51:01.310854  219464 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-538119 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:51:01.376022  219464 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538119 --format={{.State.Status}}
	I1126 20:51:01.398017  219464 machine.go:94] provisionDockerMachine start ...
	I1126 20:51:01.398179  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:01.420703  219464 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:01.421214  219464 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1126 20:51:01.421232  219464 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:51:01.421883  219464 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50404->127.0.0.1:33068: read: connection reset by peer
	I1126 20:51:04.573406  219464 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538119
	
	I1126 20:51:04.573431  219464 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-538119"
	I1126 20:51:04.573509  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:04.591320  219464 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:04.591640  219464 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1126 20:51:04.591659  219464 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-538119 && echo "default-k8s-diff-port-538119" | sudo tee /etc/hostname
	I1126 20:51:04.751933  219464 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-538119
	
	I1126 20:51:04.752023  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:04.770666  219464 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:04.770979  219464 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1126 20:51:04.771003  219464 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-538119' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-538119/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-538119' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:51:04.934176  219464 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:51:04.934205  219464 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:51:04.934235  219464 ubuntu.go:190] setting up certificates
	I1126 20:51:04.934249  219464 provision.go:84] configureAuth start
	I1126 20:51:04.934324  219464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538119
	I1126 20:51:04.951536  219464 provision.go:143] copyHostCerts
	I1126 20:51:04.951601  219464 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:51:04.951610  219464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:51:04.951687  219464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:51:04.951809  219464 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:51:04.951827  219464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:51:04.951862  219464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:51:04.951937  219464 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:51:04.951949  219464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:51:04.951982  219464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:51:04.952045  219464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-538119 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-538119 localhost minikube]
	I1126 20:51:05.185525  219464 provision.go:177] copyRemoteCerts
	I1126 20:51:05.185597  219464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:51:05.185649  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:05.206734  219464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	W1126 20:51:01.106897  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	W1126 20:51:03.605975  214963 node_ready.go:57] node "embed-certs-616586" has "Ready":"False" status (will retry)
	I1126 20:51:05.107737  214963 node_ready.go:49] node "embed-certs-616586" is "Ready"
	I1126 20:51:05.107773  214963 node_ready.go:38] duration metric: took 40.504440685s for node "embed-certs-616586" to be "Ready" ...
	I1126 20:51:05.107795  214963 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:51:05.107861  214963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:51:05.133098  214963 api_server.go:72] duration metric: took 41.886494806s to wait for apiserver process to appear ...
	I1126 20:51:05.133121  214963 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:51:05.133142  214963 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:51:05.142357  214963 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1126 20:51:05.148282  214963 api_server.go:141] control plane version: v1.34.1
	I1126 20:51:05.148310  214963 api_server.go:131] duration metric: took 15.181331ms to wait for apiserver health ...
	I1126 20:51:05.148320  214963 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:51:05.154511  214963 system_pods.go:59] 8 kube-system pods found
	I1126 20:51:05.154556  214963 system_pods.go:61] "coredns-66bc5c9577-lmmqs" [8b9cb74e-e5f6-413d-918a-66872e539adf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:51:05.154566  214963 system_pods.go:61] "etcd-embed-certs-616586" [2379b064-da28-43a0-b71d-4a9803da3169] Running
	I1126 20:51:05.154572  214963 system_pods.go:61] "kindnet-5zbx9" [d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556] Running
	I1126 20:51:05.154576  214963 system_pods.go:61] "kube-apiserver-embed-certs-616586" [6e697b4a-2458-4ef6-8c72-8c8272b80d6e] Running
	I1126 20:51:05.154581  214963 system_pods.go:61] "kube-controller-manager-embed-certs-616586" [a0385efe-91d4-40ed-b76c-be281d7ed831] Running
	I1126 20:51:05.154584  214963 system_pods.go:61] "kube-proxy-g5vk4" [711e6b5c-eac4-4b0c-9a50-22ddb3b73c53] Running
	I1126 20:51:05.154588  214963 system_pods.go:61] "kube-scheduler-embed-certs-616586" [08620aaf-720f-4514-b73f-6eb433363368] Running
	I1126 20:51:05.154593  214963 system_pods.go:61] "storage-provisioner" [ceee294c-4db0-4dc0-888c-e3733a2592cb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:51:05.154600  214963 system_pods.go:74] duration metric: took 6.274094ms to wait for pod list to return data ...
	I1126 20:51:05.154609  214963 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:51:05.173911  214963 default_sa.go:45] found service account: "default"
	I1126 20:51:05.174022  214963 default_sa.go:55] duration metric: took 19.406034ms for default service account to be created ...
	I1126 20:51:05.174070  214963 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:51:05.180950  214963 system_pods.go:86] 8 kube-system pods found
	I1126 20:51:05.181039  214963 system_pods.go:89] "coredns-66bc5c9577-lmmqs" [8b9cb74e-e5f6-413d-918a-66872e539adf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:51:05.181062  214963 system_pods.go:89] "etcd-embed-certs-616586" [2379b064-da28-43a0-b71d-4a9803da3169] Running
	I1126 20:51:05.181105  214963 system_pods.go:89] "kindnet-5zbx9" [d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556] Running
	I1126 20:51:05.181136  214963 system_pods.go:89] "kube-apiserver-embed-certs-616586" [6e697b4a-2458-4ef6-8c72-8c8272b80d6e] Running
	I1126 20:51:05.181167  214963 system_pods.go:89] "kube-controller-manager-embed-certs-616586" [a0385efe-91d4-40ed-b76c-be281d7ed831] Running
	I1126 20:51:05.181216  214963 system_pods.go:89] "kube-proxy-g5vk4" [711e6b5c-eac4-4b0c-9a50-22ddb3b73c53] Running
	I1126 20:51:05.181250  214963 system_pods.go:89] "kube-scheduler-embed-certs-616586" [08620aaf-720f-4514-b73f-6eb433363368] Running
	I1126 20:51:05.181295  214963 system_pods.go:89] "storage-provisioner" [ceee294c-4db0-4dc0-888c-e3733a2592cb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:51:05.181336  214963 retry.go:31] will retry after 260.462487ms: missing components: kube-dns
	I1126 20:51:05.450513  214963 system_pods.go:86] 8 kube-system pods found
	I1126 20:51:05.450554  214963 system_pods.go:89] "coredns-66bc5c9577-lmmqs" [8b9cb74e-e5f6-413d-918a-66872e539adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:51:05.450562  214963 system_pods.go:89] "etcd-embed-certs-616586" [2379b064-da28-43a0-b71d-4a9803da3169] Running
	I1126 20:51:05.450569  214963 system_pods.go:89] "kindnet-5zbx9" [d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556] Running
	I1126 20:51:05.450573  214963 system_pods.go:89] "kube-apiserver-embed-certs-616586" [6e697b4a-2458-4ef6-8c72-8c8272b80d6e] Running
	I1126 20:51:05.450580  214963 system_pods.go:89] "kube-controller-manager-embed-certs-616586" [a0385efe-91d4-40ed-b76c-be281d7ed831] Running
	I1126 20:51:05.450584  214963 system_pods.go:89] "kube-proxy-g5vk4" [711e6b5c-eac4-4b0c-9a50-22ddb3b73c53] Running
	I1126 20:51:05.450588  214963 system_pods.go:89] "kube-scheduler-embed-certs-616586" [08620aaf-720f-4514-b73f-6eb433363368] Running
	I1126 20:51:05.450591  214963 system_pods.go:89] "storage-provisioner" [ceee294c-4db0-4dc0-888c-e3733a2592cb] Running
	I1126 20:51:05.450599  214963 system_pods.go:126] duration metric: took 276.509094ms to wait for k8s-apps to be running ...
	I1126 20:51:05.450606  214963 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:51:05.450657  214963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:51:05.469426  214963 system_svc.go:56] duration metric: took 18.811139ms WaitForService to wait for kubelet
	I1126 20:51:05.469463  214963 kubeadm.go:587] duration metric: took 42.222864018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:51:05.469482  214963 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:51:05.474272  214963 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:51:05.474312  214963 node_conditions.go:123] node cpu capacity is 2
	I1126 20:51:05.474332  214963 node_conditions.go:105] duration metric: took 4.842497ms to run NodePressure ...
	I1126 20:51:05.474346  214963 start.go:242] waiting for startup goroutines ...
	I1126 20:51:05.474353  214963 start.go:247] waiting for cluster config update ...
	I1126 20:51:05.474370  214963 start.go:256] writing updated cluster config ...
	I1126 20:51:05.474703  214963 ssh_runner.go:195] Run: rm -f paused
	I1126 20:51:05.488764  214963 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:51:05.547491  214963 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lmmqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:05.330526  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:51:05.359795  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1126 20:51:05.385813  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:51:05.411803  219464 provision.go:87] duration metric: took 477.530088ms to configureAuth
	I1126 20:51:05.411915  219464 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:51:05.412148  219464 config.go:182] Loaded profile config "default-k8s-diff-port-538119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:51:05.412298  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:05.451040  219464 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:05.451368  219464 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1126 20:51:05.451382  219464 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:51:05.865765  219464 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:51:05.865791  219464 machine.go:97] duration metric: took 4.467755308s to provisionDockerMachine
	I1126 20:51:05.865803  219464 client.go:176] duration metric: took 10.415504868s to LocalClient.Create
	I1126 20:51:05.865818  219464 start.go:167] duration metric: took 10.415572254s to libmachine.API.Create "default-k8s-diff-port-538119"
	I1126 20:51:05.865829  219464 start.go:293] postStartSetup for "default-k8s-diff-port-538119" (driver="docker")
	I1126 20:51:05.865840  219464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:51:05.865956  219464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:51:05.866004  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:05.883355  219464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:51:05.985967  219464 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:51:05.989155  219464 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:51:05.989188  219464 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:51:05.989202  219464 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:51:05.989272  219464 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:51:05.989373  219464 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:51:05.989501  219464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:51:05.997021  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:51:06.017770  219464 start.go:296] duration metric: took 151.925051ms for postStartSetup
	I1126 20:51:06.018216  219464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538119
	I1126 20:51:06.035752  219464 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/config.json ...
	I1126 20:51:06.036058  219464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:51:06.036113  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:06.057271  219464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:51:06.159110  219464 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:51:06.163937  219464 start.go:128] duration metric: took 10.717373795s to createHost
	I1126 20:51:06.163963  219464 start.go:83] releasing machines lock for "default-k8s-diff-port-538119", held for 10.717495162s
	I1126 20:51:06.164035  219464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538119
	I1126 20:51:06.183829  219464 ssh_runner.go:195] Run: cat /version.json
	I1126 20:51:06.183842  219464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:51:06.183884  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:06.183898  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:06.206387  219464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:51:06.219610  219464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:51:06.440948  219464 ssh_runner.go:195] Run: systemctl --version
	I1126 20:51:06.447468  219464 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:51:06.483173  219464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:51:06.487319  219464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:51:06.487387  219464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:51:06.520339  219464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1126 20:51:06.520360  219464 start.go:496] detecting cgroup driver to use...
	I1126 20:51:06.520390  219464 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:51:06.520435  219464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:51:06.539711  219464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:51:06.556033  219464 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:51:06.556110  219464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:51:06.580702  219464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:51:06.599008  219464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:51:06.718743  219464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:51:06.843427  219464 docker.go:234] disabling docker service ...
	I1126 20:51:06.843495  219464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:51:06.865551  219464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:51:06.879426  219464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:51:06.994397  219464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:51:07.120574  219464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:51:07.133682  219464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:51:07.158197  219464 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:51:07.158262  219464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:07.172285  219464 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:51:07.172374  219464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:07.182321  219464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:07.190909  219464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:07.199826  219464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:51:07.208705  219464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:07.217598  219464 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:07.234436  219464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
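	[editor's note] The crictl.yaml write and the cri-o sed edits above can be reproduced locally against scratch files. This is a sketch only: the /tmp paths and the sample 02-crio.conf contents are illustrative stand-ins, while the real run targets /etc on the node over SSH with sudo.

```shell
# Scratch copies of the files minikube edits on the node; /tmp paths are illustrative.
mkdir -p /tmp/crio-demo

# crictl endpoint config (the printf | tee step in the log):
printf %s 'runtime-endpoint: unix:///var/run/crio/crio.sock
' > /tmp/crio-demo/crictl.yaml

# A minimal stand-in for /etc/crio/crio.conf.d/02-crio.conf:
cat > /tmp/crio-demo/02-crio.conf <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# The same substitutions the log runs via ssh_runner, in the same order:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /tmp/crio-demo/02-crio.conf
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /tmp/crio-demo/02-crio.conf
sed -i '/conmon_cgroup = .*/d' /tmp/crio-demo/02-crio.conf
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /tmp/crio-demo/02-crio.conf

cat /tmp/crio-demo/02-crio.conf
```

	Note the delete-then-append pair for conmon_cgroup: it makes the edit idempotent, so re-running the sequence never duplicates the key.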
	I1126 20:51:07.243770  219464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:51:07.252463  219464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:51:07.259767  219464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:51:07.376386  219464 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:51:07.544818  219464 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:51:07.544893  219464 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:51:07.550261  219464 start.go:564] Will wait 60s for crictl version
	I1126 20:51:07.550335  219464 ssh_runner.go:195] Run: which crictl
	I1126 20:51:07.554830  219464 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:51:07.580597  219464 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:51:07.580680  219464 ssh_runner.go:195] Run: crio --version
	I1126 20:51:07.611770  219464 ssh_runner.go:195] Run: crio --version
	I1126 20:51:07.650657  219464 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:51:06.553444  214963 pod_ready.go:94] pod "coredns-66bc5c9577-lmmqs" is "Ready"
	I1126 20:51:06.553473  214963 pod_ready.go:86] duration metric: took 1.005955975s for pod "coredns-66bc5c9577-lmmqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:06.557148  214963 pod_ready.go:83] waiting for pod "etcd-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:06.561863  214963 pod_ready.go:94] pod "etcd-embed-certs-616586" is "Ready"
	I1126 20:51:06.561890  214963 pod_ready.go:86] duration metric: took 4.718833ms for pod "etcd-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:06.564210  214963 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:06.569021  214963 pod_ready.go:94] pod "kube-apiserver-embed-certs-616586" is "Ready"
	I1126 20:51:06.569049  214963 pod_ready.go:86] duration metric: took 4.816446ms for pod "kube-apiserver-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:06.571026  214963 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:06.751145  214963 pod_ready.go:94] pod "kube-controller-manager-embed-certs-616586" is "Ready"
	I1126 20:51:06.751174  214963 pod_ready.go:86] duration metric: took 180.124038ms for pod "kube-controller-manager-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:06.952195  214963 pod_ready.go:83] waiting for pod "kube-proxy-g5vk4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:07.352705  214963 pod_ready.go:94] pod "kube-proxy-g5vk4" is "Ready"
	I1126 20:51:07.352730  214963 pod_ready.go:86] duration metric: took 400.506094ms for pod "kube-proxy-g5vk4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:07.552802  214963 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:07.951751  214963 pod_ready.go:94] pod "kube-scheduler-embed-certs-616586" is "Ready"
	I1126 20:51:07.951774  214963 pod_ready.go:86] duration metric: took 398.946847ms for pod "kube-scheduler-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:51:07.951786  214963 pod_ready.go:40] duration metric: took 2.462971023s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:51:08.044833  214963 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:51:08.048700  214963 out.go:179] * Done! kubectl is now configured to use "embed-certs-616586" cluster and "default" namespace by default
	I1126 20:51:07.653604  219464 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:51:07.670515  219464 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1126 20:51:07.674340  219464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
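	[editor's note] The grep/echo/cp pipeline above is minikube's idempotent way of pinning a hosts entry: strip any stale line for the name, then append the fresh one. A bash sketch of the same idiom against a scratch file (the /tmp path is illustrative; the real command edits /etc/hosts via sudo cp):

```shell
# Scratch stand-in for /etc/hosts.
HOSTS=/tmp/demo-hosts
printf '127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n' > "$HOSTS"

# Remove any existing entry for the name (tab-anchored, as in the log),
# then append the desired mapping:
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; printf '192.168.76.1\thost.minikube.internal\n'; } > "/tmp/h.$$"
cp "/tmp/h.$$" "$HOSTS"
cat "$HOSTS"
```

	Running it repeatedly leaves exactly one host.minikube.internal entry, which is why the log can apply it unconditionally on every start.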
	I1126 20:51:07.684185  219464 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:51:07.684309  219464 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:51:07.684368  219464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:51:07.721276  219464 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:51:07.721300  219464 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:51:07.721354  219464 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:51:07.746979  219464 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:51:07.747001  219464 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:51:07.747010  219464 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1126 20:51:07.747103  219464 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:51:07.747239  219464 ssh_runner.go:195] Run: crio config
	I1126 20:51:07.815575  219464 cni.go:84] Creating CNI manager for ""
	I1126 20:51:07.815601  219464 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:51:07.815717  219464 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:51:07.815748  219464 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538119 NodeName:default-k8s-diff-port-538119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:51:07.815936  219464 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538119"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:51:07.816025  219464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:51:07.824014  219464 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:51:07.824101  219464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:51:07.831657  219464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1126 20:51:07.843959  219464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:51:07.857438  219464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1126 20:51:07.870362  219464 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:51:07.874296  219464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:51:07.884202  219464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:51:08.000462  219464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:51:08.026796  219464 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119 for IP: 192.168.76.2
	I1126 20:51:08.026822  219464 certs.go:195] generating shared ca certs ...
	I1126 20:51:08.026838  219464 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:08.026993  219464 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:51:08.027044  219464 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:51:08.027051  219464 certs.go:257] generating profile certs ...
	I1126 20:51:08.027105  219464 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.key
	I1126 20:51:08.027123  219464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt with IP's: []
	I1126 20:51:08.730683  219464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt ...
	I1126 20:51:08.730722  219464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: {Name:mkf96fb23b7aecb22d21f855a9870fe6ec015790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:08.730960  219464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.key ...
	I1126 20:51:08.730981  219464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.key: {Name:mk4533d4e99287f2a7d6290e5b19d362edf21f0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:08.731137  219464 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.key.08a6970d
	I1126 20:51:08.731176  219464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.crt.08a6970d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1126 20:51:08.963559  219464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.crt.08a6970d ...
	I1126 20:51:08.963595  219464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.crt.08a6970d: {Name:mkbe15c47d9d3653ed6966e26d2940165da8a822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:08.963781  219464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.key.08a6970d ...
	I1126 20:51:08.963795  219464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.key.08a6970d: {Name:mk44df1f4ce42548eaf0f273041f673cdcf136a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:08.963880  219464 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.crt.08a6970d -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.crt
	I1126 20:51:08.963962  219464 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.key.08a6970d -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.key
	I1126 20:51:08.964024  219464 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/proxy-client.key
	I1126 20:51:08.964042  219464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/proxy-client.crt with IP's: []
	I1126 20:51:09.291185  219464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/proxy-client.crt ...
	I1126 20:51:09.291220  219464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/proxy-client.crt: {Name:mkb8fe7d2b13d31f52cafd358b72ffc111f6e717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:09.291417  219464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/proxy-client.key ...
	I1126 20:51:09.291430  219464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/proxy-client.key: {Name:mk95edda3b4659667537618923d0199dc881a8bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:09.291640  219464 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:51:09.291688  219464 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:51:09.291702  219464 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:51:09.291730  219464 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:51:09.291760  219464 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:51:09.291787  219464 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:51:09.291836  219464 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:51:09.292406  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:51:09.313531  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:51:09.336550  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:51:09.356232  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:51:09.375626  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1126 20:51:09.393883  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:51:09.413676  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:51:09.431650  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:51:09.448587  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:51:09.466432  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:51:09.483739  219464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:51:09.507031  219464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:51:09.521395  219464 ssh_runner.go:195] Run: openssl version
	I1126 20:51:09.528505  219464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:51:09.538390  219464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:51:09.542569  219464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:51:09.542652  219464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:51:09.583753  219464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:51:09.592044  219464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:51:09.600611  219464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:51:09.604638  219464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:51:09.604727  219464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:51:09.645844  219464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:51:09.654489  219464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:51:09.663331  219464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:51:09.667350  219464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:51:09.667415  219464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:51:09.713566  219464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
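	[editor's note] The filenames in the ln -fs commands above (3ec20f2e.0, b5213941.0, 51391683.0) come from `openssl x509 -hash -noout`, which prints the subject-name hash OpenSSL uses to look certificates up in /etc/ssl/certs. A self-contained sketch with a throwaway self-signed cert (all paths and the CN are illustrative):

```shell
# Generate a throwaway self-signed cert, then link it under its subject hash,
# mirroring the openssl-hash + ln -fs steps in the log.
D=/tmp/certs-demo
mkdir -p "$D"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$D/ca.key" -out "$D/ca.pem" -days 1 2>/dev/null

H=$(openssl x509 -hash -noout -in "$D/ca.pem")
ln -fs "$D/ca.pem" "$D/$H.0"    # the .0 suffix disambiguates hash collisions
ls -l "$D/$H.0"
```

	The `test -L || ln -fs` guard in the log serves the same purpose as -f here: the link is (re)created without error if a previous run already made it.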
	I1126 20:51:09.722101  219464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:51:09.726527  219464 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:51:09.726630  219464 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:51:09.726760  219464 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:51:09.726847  219464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:51:09.765189  219464 cri.go:89] found id: ""
	I1126 20:51:09.765302  219464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:51:09.775523  219464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:51:09.783871  219464 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:51:09.783987  219464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:51:09.792182  219464 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:51:09.792202  219464 kubeadm.go:158] found existing configuration files:
	
	I1126 20:51:09.792254  219464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1126 20:51:09.799952  219464 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:51:09.800052  219464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:51:09.807632  219464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1126 20:51:09.815592  219464 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:51:09.815682  219464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:51:09.823154  219464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1126 20:51:09.830582  219464 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:51:09.830645  219464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:51:09.837623  219464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1126 20:51:09.845215  219464 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:51:09.845304  219464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:51:09.852909  219464 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:51:09.896075  219464 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:51:09.896150  219464 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:51:09.919621  219464 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:51:09.919715  219464 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1126 20:51:09.919774  219464 kubeadm.go:319] OS: Linux
	I1126 20:51:09.919838  219464 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:51:09.919900  219464 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1126 20:51:09.919963  219464 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:51:09.920028  219464 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:51:09.920089  219464 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:51:09.920153  219464 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:51:09.920203  219464 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:51:09.920273  219464 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:51:09.920335  219464 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1126 20:51:09.995999  219464 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:51:09.996128  219464 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:51:09.996231  219464 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:51:10.006360  219464 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1126 20:51:10.014492  219464 out.go:252]   - Generating certificates and keys ...
	I1126 20:51:10.014617  219464 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:51:10.014706  219464 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:51:10.458546  219464 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:51:10.557154  219464 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:51:10.885070  219464 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:51:11.128955  219464 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:51:11.275898  219464 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:51:11.276181  219464 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-538119 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:51:11.524384  219464 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:51:11.524653  219464 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-538119 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1126 20:51:12.770458  219464 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:51:13.358364  219464 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:51:14.152884  219464 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:51:14.153140  219464 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:51:14.518128  219464 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:51:14.774636  219464 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:51:15.287877  219464 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:51:15.777533  219464 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:51:16.780914  219464 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:51:16.781762  219464 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:51:16.784349  219464 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Nov 26 20:51:05 embed-certs-616586 crio[837]: time="2025-11-26T20:51:05.307718411Z" level=info msg="Created container ed35ff1e59b961c2fba87f2abafe79f24c1722d8ec1db8b2b6d8668e93446c15: kube-system/coredns-66bc5c9577-lmmqs/coredns" id=fece8043-4de2-464b-8212-bd9d3e6f6199 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:51:05 embed-certs-616586 crio[837]: time="2025-11-26T20:51:05.308466335Z" level=info msg="Starting container: ed35ff1e59b961c2fba87f2abafe79f24c1722d8ec1db8b2b6d8668e93446c15" id=abaa9e95-9891-4ec6-b571-44bca03858e7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:51:05 embed-certs-616586 crio[837]: time="2025-11-26T20:51:05.310916338Z" level=info msg="Started container" PID=1719 containerID=ed35ff1e59b961c2fba87f2abafe79f24c1722d8ec1db8b2b6d8668e93446c15 description=kube-system/coredns-66bc5c9577-lmmqs/coredns id=abaa9e95-9891-4ec6-b571-44bca03858e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39ea37e8fe661a3a159f793447335a676405c36b63da32a0ff47e08997adc48c
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.618690654Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b3add29e-052d-43fa-978e-20b774902823 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.618768092Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.630381136Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c81f5220ce392f6f3562304262b9e58f9e590cb18f2b884c43a82b07cc8839e1 UID:350da55d-5536-49f4-9d13-9fdd1bb3c7de NetNS:/var/run/netns/62a14c2e-efee-4813-bcc5-44f54ce5c7c6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b0f0}] Aliases:map[]}"
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.630556352Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.639192262Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c81f5220ce392f6f3562304262b9e58f9e590cb18f2b884c43a82b07cc8839e1 UID:350da55d-5536-49f4-9d13-9fdd1bb3c7de NetNS:/var/run/netns/62a14c2e-efee-4813-bcc5-44f54ce5c7c6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012b0f0}] Aliases:map[]}"
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.639332295Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.641768242Z" level=info msg="Ran pod sandbox c81f5220ce392f6f3562304262b9e58f9e590cb18f2b884c43a82b07cc8839e1 with infra container: default/busybox/POD" id=b3add29e-052d-43fa-978e-20b774902823 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.6428611Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5c6b7a8d-5fb0-4e8e-9e64-e993baa8e024 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.642972038Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5c6b7a8d-5fb0-4e8e-9e64-e993baa8e024 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.643010577Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5c6b7a8d-5fb0-4e8e-9e64-e993baa8e024 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.64882651Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a4e84811-0712-4d5d-abe2-6fe3aa472dd7 name=/runtime.v1.ImageService/PullImage
	Nov 26 20:51:08 embed-certs-616586 crio[837]: time="2025-11-26T20:51:08.652131937Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 26 20:51:10 embed-certs-616586 crio[837]: time="2025-11-26T20:51:10.761502272Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=a4e84811-0712-4d5d-abe2-6fe3aa472dd7 name=/runtime.v1.ImageService/PullImage
	Nov 26 20:51:10 embed-certs-616586 crio[837]: time="2025-11-26T20:51:10.763924731Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a3672a81-d20c-469f-8f3a-d59c98fd974b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:51:10 embed-certs-616586 crio[837]: time="2025-11-26T20:51:10.769436772Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a23b1454-ec47-48ef-9258-11b4397031d3 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:51:10 embed-certs-616586 crio[837]: time="2025-11-26T20:51:10.777990076Z" level=info msg="Creating container: default/busybox/busybox" id=d9c3e516-574a-4f56-859e-ab83d74b7b0e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:51:10 embed-certs-616586 crio[837]: time="2025-11-26T20:51:10.77827316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:51:10 embed-certs-616586 crio[837]: time="2025-11-26T20:51:10.783872493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:51:10 embed-certs-616586 crio[837]: time="2025-11-26T20:51:10.784654328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:51:10 embed-certs-616586 crio[837]: time="2025-11-26T20:51:10.804101247Z" level=info msg="Created container f1f0a8b4228f6a368cddbb5dfe4573b04d54ac17df666ee1dc239ce8681fc8c4: default/busybox/busybox" id=d9c3e516-574a-4f56-859e-ab83d74b7b0e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:51:10 embed-certs-616586 crio[837]: time="2025-11-26T20:51:10.813346781Z" level=info msg="Starting container: f1f0a8b4228f6a368cddbb5dfe4573b04d54ac17df666ee1dc239ce8681fc8c4" id=1a7d9da6-a333-4c40-a065-983140e4289d name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:51:10 embed-certs-616586 crio[837]: time="2025-11-26T20:51:10.816347947Z" level=info msg="Started container" PID=1774 containerID=f1f0a8b4228f6a368cddbb5dfe4573b04d54ac17df666ee1dc239ce8681fc8c4 description=default/busybox/busybox id=1a7d9da6-a333-4c40-a065-983140e4289d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c81f5220ce392f6f3562304262b9e58f9e590cb18f2b884c43a82b07cc8839e1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	f1f0a8b4228f6       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   c81f5220ce392       busybox                                      default
	ed35ff1e59b96       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 seconds ago       Running             coredns                   0                   39ea37e8fe661       coredns-66bc5c9577-lmmqs                     kube-system
	397888f0ec12d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago       Running             storage-provisioner       0                   f8acd26cd30ac       storage-provisioner                          kube-system
	cb1e6f1c68ed6       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      56 seconds ago       Running             kube-proxy                0                   f39663ce5e9b9       kube-proxy-g5vk4                             kube-system
	69194b777e5ab       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      56 seconds ago       Running             kindnet-cni               0                   c5345977875a1       kindnet-5zbx9                                kube-system
	09727182292c5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   ddae37d67dd8f       kube-controller-manager-embed-certs-616586   kube-system
	467de9bb922bc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   11c620e3136ac       kube-apiserver-embed-certs-616586            kube-system
	eee03cf96b443       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   c5b700f1c69a0       etcd-embed-certs-616586                      kube-system
	1883265d968e4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   490cbc6b7a121       kube-scheduler-embed-certs-616586            kube-system
	
	
	==> coredns [ed35ff1e59b961c2fba87f2abafe79f24c1722d8ec1db8b2b6d8668e93446c15] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37256 - 60662 "HINFO IN 724620588318579622.8077054493224773830. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015378658s
	
	
	==> describe nodes <==
	Name:               embed-certs-616586
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-616586
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=embed-certs-616586
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_50_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:50:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-616586
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:51:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:51:19 +0000   Wed, 26 Nov 2025 20:50:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:51:19 +0000   Wed, 26 Nov 2025 20:50:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:51:19 +0000   Wed, 26 Nov 2025 20:50:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:51:19 +0000   Wed, 26 Nov 2025 20:51:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-616586
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                dbf22ae5-72fe-466d-9fb8-0a6db34daaea
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-lmmqs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-embed-certs-616586                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-5zbx9                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-616586             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-616586    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-g5vk4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-616586             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 56s   kube-proxy       
	  Normal   Starting                 62s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s   kubelet          Node embed-certs-616586 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s   kubelet          Node embed-certs-616586 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s   kubelet          Node embed-certs-616586 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s   node-controller  Node embed-certs-616586 event: Registered Node embed-certs-616586 in Controller
	  Normal   NodeReady                16s   kubelet          Node embed-certs-616586 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov26 20:24] overlayfs: idmapped layers are currently not supported
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	[Nov26 20:49] overlayfs: idmapped layers are currently not supported
	[Nov26 20:50] overlayfs: idmapped layers are currently not supported
	[Nov26 20:51] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [eee03cf96b443bb57280e4833921cfaac2c408ba73a2eeef644e70f2c03b27a8] <==
	{"level":"warn","ts":"2025-11-26T20:50:13.876186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:13.902903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:13.914452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:13.934077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:13.952548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:13.971375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.009675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.049258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.071871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.115643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.133138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.152492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.173665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.197508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.208716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.232257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.249881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.282795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.290587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.307425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.333502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.361044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.382762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.405546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:50:14.521892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59166","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:51:20 up  1:33,  0 user,  load average: 3.13, 3.09, 2.51
	Linux embed-certs-616586 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [69194b777e5ab9ff27aad32ddf0a16bda66cb2f0f8258da967cf6f181584d472] <==
	I1126 20:50:24.028953       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:50:24.029246       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:50:24.029364       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:50:24.029375       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:50:24.029385       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:50:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:50:24.230028       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:50:24.230047       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:50:24.230064       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:50:24.230388       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:50:54.230224       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:50:54.230393       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:50:54.230420       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:50:54.234864       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1126 20:50:55.831286       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:50:55.831323       1 metrics.go:72] Registering metrics
	I1126 20:50:55.831402       1 controller.go:711] "Syncing nftables rules"
	I1126 20:51:04.234630       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:51:04.234684       1 main.go:301] handling current node
	I1126 20:51:14.229973       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:51:14.230011       1 main.go:301] handling current node
	
	
	==> kube-apiserver [467de9bb922bcbe61856d90ac8607306d913d822b8d57484848446d435fa0bc7] <==
	E1126 20:50:15.505567       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1126 20:50:15.525530       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1126 20:50:15.569561       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:50:15.579739       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1126 20:50:15.581887       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:50:15.635616       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:50:15.638588       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:50:15.764293       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:50:16.254037       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:50:16.266034       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:50:16.266057       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:50:17.008585       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:50:17.066338       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:50:17.169645       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:50:17.180263       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1126 20:50:17.181402       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:50:17.186510       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:50:17.402852       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:50:18.106213       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:50:18.128531       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:50:18.140404       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:50:23.059574       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:50:23.064201       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:50:23.336909       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1126 20:50:23.588999       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [09727182292c52a66b1551666098b29ee8aa58c738454aaca86cbd4c9ffef1ac] <==
	I1126 20:50:22.419666       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-616586" podCIDRs=["10.244.0.0/24"]
	I1126 20:50:22.421058       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:50:22.430475       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:50:22.435723       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:50:22.439812       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1126 20:50:22.439871       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1126 20:50:22.439909       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 20:50:22.439890       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1126 20:50:22.443221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1126 20:50:22.450008       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:50:22.450051       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:50:22.450504       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:50:22.450522       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:50:22.452243       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:50:22.452323       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 20:50:22.452590       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 20:50:22.452622       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:50:22.452987       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:50:22.454164       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:50:22.454357       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:50:22.458809       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:50:22.460017       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:50:22.462151       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1126 20:50:22.462157       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:51:07.408655       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cb1e6f1c68ed67b9df70de3afba1b861b1b9ff792f3dbd95c0b75e95ee9c6a2b] <==
	I1126 20:50:24.356324       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:50:24.460764       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:50:24.562813       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:50:24.563538       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1126 20:50:24.563722       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:50:24.599461       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:50:24.599590       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:50:24.607713       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:50:24.608332       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:50:24.609111       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:50:24.611113       1 config.go:200] "Starting service config controller"
	I1126 20:50:24.611189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:50:24.611233       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:50:24.611261       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:50:24.611325       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:50:24.611354       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:50:24.612337       1 config.go:309] "Starting node config controller"
	I1126 20:50:24.612395       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:50:24.612425       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:50:24.713065       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:50:24.713115       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 20:50:24.713150       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1883265d968e48074fcf46f80c65da9a3f1d124238adca15c185826aaaa7dbea] <==
	I1126 20:50:15.713827       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1126 20:50:15.763267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 20:50:15.763560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 20:50:15.763803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:50:15.763884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 20:50:15.764226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:50:15.764319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:50:15.764394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 20:50:15.764463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 20:50:15.764528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 20:50:15.764666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:50:15.764839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 20:50:15.765106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:50:15.765437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:50:15.765552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 20:50:15.765675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 20:50:15.765716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 20:50:15.766969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 20:50:16.613060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 20:50:16.627411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:50:16.635941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 20:50:16.649829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:50:16.671755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:50:16.844177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1126 20:50:18.913406       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:50:19 embed-certs-616586 kubelet[1297]: I1126 20:50:19.242978    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-616586" podStartSLOduration=1.242956096 podStartE2EDuration="1.242956096s" podCreationTimestamp="2025-11-26 20:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:50:19.227827087 +0000 UTC m=+1.286512984" watchObservedRunningTime="2025-11-26 20:50:19.242956096 +0000 UTC m=+1.301641993"
	Nov 26 20:50:19 embed-certs-616586 kubelet[1297]: I1126 20:50:19.261289    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-616586" podStartSLOduration=2.261259686 podStartE2EDuration="2.261259686s" podCreationTimestamp="2025-11-26 20:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:50:19.243788423 +0000 UTC m=+1.302474345" watchObservedRunningTime="2025-11-26 20:50:19.261259686 +0000 UTC m=+1.319945583"
	Nov 26 20:50:19 embed-certs-616586 kubelet[1297]: I1126 20:50:19.261421    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-616586" podStartSLOduration=1.261414898 podStartE2EDuration="1.261414898s" podCreationTimestamp="2025-11-26 20:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:50:19.260791245 +0000 UTC m=+1.319477151" watchObservedRunningTime="2025-11-26 20:50:19.261414898 +0000 UTC m=+1.320100795"
	Nov 26 20:50:22 embed-certs-616586 kubelet[1297]: I1126 20:50:22.460546    1297 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 26 20:50:22 embed-certs-616586 kubelet[1297]: I1126 20:50:22.461255    1297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 26 20:50:23 embed-certs-616586 kubelet[1297]: I1126 20:50:23.527567    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556-xtables-lock\") pod \"kindnet-5zbx9\" (UID: \"d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556\") " pod="kube-system/kindnet-5zbx9"
	Nov 26 20:50:23 embed-certs-616586 kubelet[1297]: I1126 20:50:23.527608    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556-lib-modules\") pod \"kindnet-5zbx9\" (UID: \"d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556\") " pod="kube-system/kindnet-5zbx9"
	Nov 26 20:50:23 embed-certs-616586 kubelet[1297]: I1126 20:50:23.527640    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556-cni-cfg\") pod \"kindnet-5zbx9\" (UID: \"d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556\") " pod="kube-system/kindnet-5zbx9"
	Nov 26 20:50:23 embed-certs-616586 kubelet[1297]: I1126 20:50:23.527661    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2ns\" (UniqueName: \"kubernetes.io/projected/d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556-kube-api-access-4z2ns\") pod \"kindnet-5zbx9\" (UID: \"d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556\") " pod="kube-system/kindnet-5zbx9"
	Nov 26 20:50:23 embed-certs-616586 kubelet[1297]: I1126 20:50:23.628628    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/711e6b5c-eac4-4b0c-9a50-22ddb3b73c53-xtables-lock\") pod \"kube-proxy-g5vk4\" (UID: \"711e6b5c-eac4-4b0c-9a50-22ddb3b73c53\") " pod="kube-system/kube-proxy-g5vk4"
	Nov 26 20:50:23 embed-certs-616586 kubelet[1297]: I1126 20:50:23.628707    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/711e6b5c-eac4-4b0c-9a50-22ddb3b73c53-kube-proxy\") pod \"kube-proxy-g5vk4\" (UID: \"711e6b5c-eac4-4b0c-9a50-22ddb3b73c53\") " pod="kube-system/kube-proxy-g5vk4"
	Nov 26 20:50:23 embed-certs-616586 kubelet[1297]: I1126 20:50:23.628765    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/711e6b5c-eac4-4b0c-9a50-22ddb3b73c53-lib-modules\") pod \"kube-proxy-g5vk4\" (UID: \"711e6b5c-eac4-4b0c-9a50-22ddb3b73c53\") " pod="kube-system/kube-proxy-g5vk4"
	Nov 26 20:50:23 embed-certs-616586 kubelet[1297]: I1126 20:50:23.628787    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8kpn\" (UniqueName: \"kubernetes.io/projected/711e6b5c-eac4-4b0c-9a50-22ddb3b73c53-kube-api-access-p8kpn\") pod \"kube-proxy-g5vk4\" (UID: \"711e6b5c-eac4-4b0c-9a50-22ddb3b73c53\") " pod="kube-system/kube-proxy-g5vk4"
	Nov 26 20:50:23 embed-certs-616586 kubelet[1297]: I1126 20:50:23.666456    1297 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 26 20:50:24 embed-certs-616586 kubelet[1297]: I1126 20:50:24.295248    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g5vk4" podStartSLOduration=1.286307016 podStartE2EDuration="1.286307016s" podCreationTimestamp="2025-11-26 20:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:50:24.265585036 +0000 UTC m=+6.324270941" watchObservedRunningTime="2025-11-26 20:50:24.286307016 +0000 UTC m=+6.344992905"
	Nov 26 20:50:25 embed-certs-616586 kubelet[1297]: I1126 20:50:25.672593    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5zbx9" podStartSLOduration=2.672574019 podStartE2EDuration="2.672574019s" podCreationTimestamp="2025-11-26 20:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:50:24.295585668 +0000 UTC m=+6.354271557" watchObservedRunningTime="2025-11-26 20:50:25.672574019 +0000 UTC m=+7.731259908"
	Nov 26 20:51:04 embed-certs-616586 kubelet[1297]: I1126 20:51:04.786955    1297 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 26 20:51:04 embed-certs-616586 kubelet[1297]: I1126 20:51:04.950866    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b9cb74e-e5f6-413d-918a-66872e539adf-config-volume\") pod \"coredns-66bc5c9577-lmmqs\" (UID: \"8b9cb74e-e5f6-413d-918a-66872e539adf\") " pod="kube-system/coredns-66bc5c9577-lmmqs"
	Nov 26 20:51:04 embed-certs-616586 kubelet[1297]: I1126 20:51:04.950921    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bq9x\" (UniqueName: \"kubernetes.io/projected/8b9cb74e-e5f6-413d-918a-66872e539adf-kube-api-access-6bq9x\") pod \"coredns-66bc5c9577-lmmqs\" (UID: \"8b9cb74e-e5f6-413d-918a-66872e539adf\") " pod="kube-system/coredns-66bc5c9577-lmmqs"
	Nov 26 20:51:04 embed-certs-616586 kubelet[1297]: I1126 20:51:04.950948    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ceee294c-4db0-4dc0-888c-e3733a2592cb-tmp\") pod \"storage-provisioner\" (UID: \"ceee294c-4db0-4dc0-888c-e3733a2592cb\") " pod="kube-system/storage-provisioner"
	Nov 26 20:51:04 embed-certs-616586 kubelet[1297]: I1126 20:51:04.950965    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bk92\" (UniqueName: \"kubernetes.io/projected/ceee294c-4db0-4dc0-888c-e3733a2592cb-kube-api-access-7bk92\") pod \"storage-provisioner\" (UID: \"ceee294c-4db0-4dc0-888c-e3733a2592cb\") " pod="kube-system/storage-provisioner"
	Nov 26 20:51:05 embed-certs-616586 kubelet[1297]: W1126 20:51:05.258412    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/crio-39ea37e8fe661a3a159f793447335a676405c36b63da32a0ff47e08997adc48c WatchSource:0}: Error finding container 39ea37e8fe661a3a159f793447335a676405c36b63da32a0ff47e08997adc48c: Status 404 returned error can't find the container with id 39ea37e8fe661a3a159f793447335a676405c36b63da32a0ff47e08997adc48c
	Nov 26 20:51:05 embed-certs-616586 kubelet[1297]: I1126 20:51:05.403326    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.403305478 podStartE2EDuration="41.403305478s" podCreationTimestamp="2025-11-26 20:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:51:05.381340629 +0000 UTC m=+47.440026518" watchObservedRunningTime="2025-11-26 20:51:05.403305478 +0000 UTC m=+47.461991375"
	Nov 26 20:51:06 embed-certs-616586 kubelet[1297]: I1126 20:51:06.352349    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lmmqs" podStartSLOduration=43.352330197 podStartE2EDuration="43.352330197s" podCreationTimestamp="2025-11-26 20:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:51:05.405133953 +0000 UTC m=+47.463819850" watchObservedRunningTime="2025-11-26 20:51:06.352330197 +0000 UTC m=+48.411016086"
	Nov 26 20:51:08 embed-certs-616586 kubelet[1297]: I1126 20:51:08.382194    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjfg8\" (UniqueName: \"kubernetes.io/projected/350da55d-5536-49f4-9d13-9fdd1bb3c7de-kube-api-access-fjfg8\") pod \"busybox\" (UID: \"350da55d-5536-49f4-9d13-9fdd1bb3c7de\") " pod="default/busybox"
	
	
	==> storage-provisioner [397888f0ec12dafa1b869a866bd9b1ebd0fc06cfbf99aba2b462a1fd954ef956] <==
	I1126 20:51:05.305990       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:51:05.437279       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:51:05.437492       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:51:05.478409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:05.491785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:51:05.492993       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:51:05.493244       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-616586_bf2e500c-097f-4218-9254-662363106890!
	I1126 20:51:05.494528       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74d92165-f92e-42f6-bb51-54e16bfb29a8", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-616586_bf2e500c-097f-4218-9254-662363106890 became leader
	W1126 20:51:05.507516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:05.512918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:51:05.597660       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-616586_bf2e500c-097f-4218-9254-662363106890!
	W1126 20:51:07.516655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:07.524376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:09.528926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:09.535447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:11.539565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:11.547909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:13.551581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:13.557190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:15.561293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:15.567177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:17.571092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:17.578036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:19.594133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:51:19.602276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-616586 -n embed-certs-616586
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-616586 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-538119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-538119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.347018ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:52:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-538119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-538119 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-538119 describe deploy/metrics-server -n kube-system: exit status 1 (86.848982ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-538119 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-538119
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-538119:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45",
	        "Created": "2025-11-26T20:51:00.643686103Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 219858,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:51:00.699274633Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/hostname",
	        "HostsPath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/hosts",
	        "LogPath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45-json.log",
	        "Name": "/default-k8s-diff-port-538119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-538119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-538119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45",
	                "LowerDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-538119",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-538119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-538119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-538119",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-538119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5f888d9a33fc6103fa51f75f77a435ba3f3124f4c0dbe7ac91db3a25047c92f4",
	            "SandboxKey": "/var/run/docker/netns/5f888d9a33fc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-538119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:5a:15:58:80:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58099cffa65b0cb809ecb55668d778b1399828737559d8aaf8663745e845c3ba",
	                    "EndpointID": "4effbe247a8037fd6d65cf44bb66370016a8494796eb917b5e8256812f2fe5f3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-538119",
	                        "0376b85fe7a8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-538119 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-538119 logs -n 25: (1.202754098s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-264537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:46 UTC │
	│ start   │ -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:46 UTC │ 26 Nov 25 20:47 UTC │
	│ start   │ -p cert-expiration-164741 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-164741       │ jenkins │ v1.37.0 │ 26 Nov 25 20:47 UTC │ 26 Nov 25 20:49 UTC │
	│ image   │ old-k8s-version-264537 image list --format=json                                                                                                                                                                                               │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ pause   │ -p old-k8s-version-264537 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │                     │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                                                                                     │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                                                                                     │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable metrics-server -p no-preload-956694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │                     │
	│ stop    │ -p no-preload-956694 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable dashboard -p no-preload-956694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p cert-expiration-164741                                                                                                                                                                                                                     │ cert-expiration-164741       │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:51 UTC │
	│ image   │ no-preload-956694 image list --format=json                                                                                                                                                                                                    │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ pause   │ -p no-preload-956694 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │                     │
	│ delete  │ -p no-preload-956694                                                                                                                                                                                                                          │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p no-preload-956694                                                                                                                                                                                                                          │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p disable-driver-mounts-180932                                                                                                                                                                                                               │ disable-driver-mounts-180932 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │                     │
	│ stop    │ -p embed-certs-616586 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-616586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:51:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:51:34.982948  222763 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:51:34.983068  222763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:51:34.983076  222763 out.go:374] Setting ErrFile to fd 2...
	I1126 20:51:34.983081  222763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:51:34.983336  222763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:51:34.983720  222763 out.go:368] Setting JSON to false
	I1126 20:51:34.984602  222763 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5625,"bootTime":1764184670,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:51:34.984664  222763 start.go:143] virtualization:  
	I1126 20:51:34.988580  222763 out.go:179] * [embed-certs-616586] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:51:34.991714  222763 notify.go:221] Checking for updates...
	I1126 20:51:34.992284  222763 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:51:34.995184  222763 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:51:34.998089  222763 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:51:35.001377  222763 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:51:35.004296  222763 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:51:35.007237  222763 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:51:35.011298  222763 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:51:35.012024  222763 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:51:35.077813  222763 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:51:35.077903  222763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:51:35.175257  222763 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:51:35.160734332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:51:35.175360  222763 docker.go:319] overlay module found
	I1126 20:51:35.178897  222763 out.go:179] * Using the docker driver based on existing profile
	I1126 20:51:34.059237  219464 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:51:34.059259  219464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:51:34.059324  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:34.099089  219464 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:51:34.099111  219464 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:51:34.099173  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:34.113790  219464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:51:34.134241  219464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:51:34.387953  219464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:51:34.388071  219464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:51:34.429184  219464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:51:34.579554  219464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:51:35.157208  219464 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538119" to be "Ready" ...
	I1126 20:51:35.158092  219464 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1126 20:51:35.181036  222763 start.go:309] selected driver: docker
	I1126 20:51:35.181054  222763 start.go:927] validating driver "docker" against &{Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:51:35.181147  222763 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:51:35.181819  222763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:51:35.283037  222763 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:51:35.266387513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:51:35.283346  222763 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:51:35.283377  222763 cni.go:84] Creating CNI manager for ""
	I1126 20:51:35.283439  222763 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:51:35.283477  222763 start.go:353] cluster config:
	{Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:51:35.286704  222763 out.go:179] * Starting "embed-certs-616586" primary control-plane node in "embed-certs-616586" cluster
	I1126 20:51:35.288760  222763 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:51:35.291134  222763 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:51:35.294928  222763 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:51:35.294973  222763 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:51:35.294994  222763 cache.go:65] Caching tarball of preloaded images
	I1126 20:51:35.295075  222763 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:51:35.295088  222763 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:51:35.295327  222763 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:51:35.295460  222763 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/config.json ...
	I1126 20:51:35.332798  222763 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:51:35.332819  222763 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:51:35.332832  222763 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:51:35.332866  222763 start.go:360] acquireMachinesLock for embed-certs-616586: {Name:mka5254437f68c39e0c98d2ff47cae58581678c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:51:35.332924  222763 start.go:364] duration metric: took 41.066µs to acquireMachinesLock for "embed-certs-616586"
	I1126 20:51:35.332942  222763 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:51:35.332947  222763 fix.go:54] fixHost starting: 
	I1126 20:51:35.333200  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:35.367540  222763 fix.go:112] recreateIfNeeded on embed-certs-616586: state=Stopped err=<nil>
	W1126 20:51:35.367569  222763 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:51:35.553883  219464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124666254s)
	I1126 20:51:35.571247  219464 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1126 20:51:35.370884  222763 out.go:252] * Restarting existing docker container for "embed-certs-616586" ...
	I1126 20:51:35.370969  222763 cli_runner.go:164] Run: docker start embed-certs-616586
	I1126 20:51:35.677340  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:35.698636  222763 kic.go:430] container "embed-certs-616586" state is running.
	I1126 20:51:35.699146  222763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:51:35.718559  222763 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/config.json ...
	I1126 20:51:35.718780  222763 machine.go:94] provisionDockerMachine start ...
	I1126 20:51:35.718845  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:35.740580  222763 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:35.740916  222763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:51:35.740925  222763 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:51:35.741654  222763 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1126 20:51:38.893253  222763 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-616586
	
	I1126 20:51:38.893274  222763 ubuntu.go:182] provisioning hostname "embed-certs-616586"
	I1126 20:51:38.893355  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:38.911175  222763 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:38.911503  222763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:51:38.911520  222763 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-616586 && echo "embed-certs-616586" | sudo tee /etc/hostname
	I1126 20:51:39.075419  222763 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-616586
	
	I1126 20:51:39.075497  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:39.093911  222763 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:39.094255  222763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:51:39.094279  222763 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-616586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-616586/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-616586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:51:39.242144  222763 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:51:39.242171  222763 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:51:39.242205  222763 ubuntu.go:190] setting up certificates
	I1126 20:51:39.242218  222763 provision.go:84] configureAuth start
	I1126 20:51:39.242297  222763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:51:39.260533  222763 provision.go:143] copyHostCerts
	I1126 20:51:39.260606  222763 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:51:39.260621  222763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:51:39.260698  222763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:51:39.260872  222763 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:51:39.260887  222763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:51:39.260924  222763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:51:39.261030  222763 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:51:39.261040  222763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:51:39.261076  222763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:51:39.261161  222763 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.embed-certs-616586 san=[127.0.0.1 192.168.85.2 embed-certs-616586 localhost minikube]
	I1126 20:51:39.549353  222763 provision.go:177] copyRemoteCerts
	I1126 20:51:39.549453  222763 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:51:39.549499  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:39.569897  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:39.678215  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:51:39.697984  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:51:39.716480  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:51:39.734318  222763 provision.go:87] duration metric: took 492.069234ms to configureAuth
	I1126 20:51:39.734386  222763 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:51:39.734601  222763 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:51:39.734710  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:39.751866  222763 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:39.752173  222763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:51:39.752193  222763 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:51:35.574117  219464 addons.go:530] duration metric: took 1.573580643s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:51:35.668674  219464 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-538119" context rescaled to 1 replicas
	W1126 20:51:37.160007  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:39.162030  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	I1126 20:51:40.134935  222763 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:51:40.134964  222763 machine.go:97] duration metric: took 4.416175619s to provisionDockerMachine
	I1126 20:51:40.134985  222763 start.go:293] postStartSetup for "embed-certs-616586" (driver="docker")
	I1126 20:51:40.134997  222763 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:51:40.135095  222763 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:51:40.135160  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:40.156713  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:40.265896  222763 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:51:40.269418  222763 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:51:40.269448  222763 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:51:40.269460  222763 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:51:40.269521  222763 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:51:40.269625  222763 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:51:40.269742  222763 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:51:40.277448  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:51:40.295113  222763 start.go:296] duration metric: took 160.111239ms for postStartSetup
	I1126 20:51:40.295205  222763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:51:40.295251  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:40.312426  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:40.415220  222763 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:51:40.420079  222763 fix.go:56] duration metric: took 5.087124917s for fixHost
	I1126 20:51:40.420106  222763 start.go:83] releasing machines lock for "embed-certs-616586", held for 5.087173121s
	I1126 20:51:40.420191  222763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:51:40.436476  222763 ssh_runner.go:195] Run: cat /version.json
	I1126 20:51:40.436512  222763 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:51:40.436527  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:40.436574  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:40.456533  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:40.467682  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:40.557485  222763 ssh_runner.go:195] Run: systemctl --version
	I1126 20:51:40.651728  222763 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:51:40.691899  222763 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:51:40.696286  222763 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:51:40.696365  222763 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:51:40.704174  222763 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:51:40.704203  222763 start.go:496] detecting cgroup driver to use...
	I1126 20:51:40.704236  222763 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:51:40.704291  222763 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:51:40.719294  222763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:51:40.732386  222763 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:51:40.732448  222763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:51:40.747871  222763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:51:40.761243  222763 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:51:40.886870  222763 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:51:41.008368  222763 docker.go:234] disabling docker service ...
	I1126 20:51:41.008466  222763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:51:41.024840  222763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:51:41.038267  222763 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:51:41.153170  222763 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:51:41.270701  222763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:51:41.285626  222763 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:51:41.300508  222763 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:51:41.300613  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.309572  222763 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:51:41.309680  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.319652  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.328022  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.337453  222763 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:51:41.346073  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.355345  222763 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.363474  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.372034  222763 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:51:41.379505  222763 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:51:41.386947  222763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:51:41.502558  222763 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:51:41.685225  222763 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:51:41.685306  222763 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:51:41.689354  222763 start.go:564] Will wait 60s for crictl version
	I1126 20:51:41.689421  222763 ssh_runner.go:195] Run: which crictl
	I1126 20:51:41.692882  222763 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:51:41.719592  222763 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:51:41.719681  222763 ssh_runner.go:195] Run: crio --version
	I1126 20:51:41.750281  222763 ssh_runner.go:195] Run: crio --version
	I1126 20:51:41.782914  222763 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:51:41.785907  222763 cli_runner.go:164] Run: docker network inspect embed-certs-616586 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:51:41.802005  222763 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:51:41.805667  222763 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:51:41.814939  222763 kubeadm.go:884] updating cluster {Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:51:41.815068  222763 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:51:41.815119  222763 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:51:41.858583  222763 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:51:41.858605  222763 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:51:41.858666  222763 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:51:41.888178  222763 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:51:41.888198  222763 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:51:41.888206  222763 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1126 20:51:41.888311  222763 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-616586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:51:41.888393  222763 ssh_runner.go:195] Run: crio config
	I1126 20:51:41.939234  222763 cni.go:84] Creating CNI manager for ""
	I1126 20:51:41.939258  222763 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:51:41.939282  222763 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:51:41.939308  222763 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-616586 NodeName:embed-certs-616586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:51:41.939434  222763 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-616586"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:51:41.939511  222763 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:51:41.947129  222763 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:51:41.947200  222763 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:51:41.954414  222763 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1126 20:51:41.967966  222763 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:51:41.980645  222763 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1126 20:51:41.992959  222763 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:51:41.996435  222763 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:51:42.005966  222763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:51:42.155984  222763 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:51:42.185341  222763 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586 for IP: 192.168.85.2
	I1126 20:51:42.185454  222763 certs.go:195] generating shared ca certs ...
	I1126 20:51:42.185493  222763 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:42.185787  222763 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:51:42.185889  222763 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:51:42.185941  222763 certs.go:257] generating profile certs ...
	I1126 20:51:42.186101  222763 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.key
	I1126 20:51:42.186251  222763 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key.319cfcc4
	I1126 20:51:42.186377  222763 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key
	I1126 20:51:42.186571  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:51:42.186661  222763 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:51:42.186694  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:51:42.186766  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:51:42.186834  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:51:42.186904  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:51:42.187049  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:51:42.189966  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:51:42.286580  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:51:42.314719  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:51:42.338045  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:51:42.378193  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1126 20:51:42.406405  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:51:42.427072  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:51:42.454436  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:51:42.480530  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:51:42.503742  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:51:42.524854  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:51:42.544432  222763 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:51:42.559173  222763 ssh_runner.go:195] Run: openssl version
	I1126 20:51:42.565986  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:51:42.574245  222763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:51:42.578961  222763 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:51:42.579026  222763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:51:42.624963  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:51:42.633348  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:51:42.641610  222763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:51:42.645558  222763 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:51:42.645622  222763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:51:42.689675  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:51:42.698176  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:51:42.706754  222763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:51:42.710637  222763 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:51:42.710720  222763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:51:42.752281  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
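The `test -L <hash>.0 || ln -fs <pem> <hash>.0` steps above install each CA into the OpenSSL trust directory under its subject-hash name (the `b5213941.0` form), guarded so re-runs are no-ops. A minimal sketch of that idempotent-symlink pattern in a throwaway directory (the file names mirror the log but the PEM content is fake):

```shell
#!/bin/bash
# Idempotent hash-symlink install, mirroring the logged
# `test -L /etc/ssl/certs/<hash>.0 || ln -fs <pem> <hash>.0` step.
dir=$(mktemp -d)
printf 'fake pem\n' > "$dir/minikubeCA.pem"

install_link() {  # $1 = target pem, $2 = hash-named link
  test -L "$2" || ln -fs "$1" "$2"
}

install_link "$dir/minikubeCA.pem" "$dir/b5213941.0"
install_link "$dir/minikubeCA.pem" "$dir/b5213941.0"   # second run: no-op

link_target=$(readlink "$dir/b5213941.0")
echo "$link_target"
```

In the real flow the hash name is not hard-coded; it is computed first with `openssl x509 -hash -noout -in <pem>`, as the preceding log lines show.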
	I1126 20:51:42.760414  222763 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:51:42.764811  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:51:42.805556  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:51:42.846461  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:51:42.887359  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:51:42.942785  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:51:43.018637  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
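The six `openssl x509 -checkend 86400` runs above verify that each control-plane certificate will not expire within the next 24 hours (86400 seconds); exit status 0 means the cert is still valid for at least that window. A minimal reproduction with a throwaway self-signed certificate (assumes an `openssl` binary on PATH; the CN and two-day lifetime are illustrative):

```shell
#!/bin/bash
# Generate a short-lived self-signed cert, then apply the same
# `-checkend 86400` validity probe that minikube runs on each cert.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" \
  -days 2 -subj "/CN=checkend-demo" 2>/dev/null

openssl x509 -noout -in "$dir/cert.pem" -checkend 86400
status=$?
echo "$status"   # 0: cert survives the next 86400 seconds
```

With `-days 1` or less the check would instead exit non-zero, which is the signal minikube uses to trigger certificate regeneration on restart.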
	I1126 20:51:43.096621  222763 kubeadm.go:401] StartCluster: {Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:51:43.096709  222763 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:51:43.096853  222763 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:51:43.146149  222763 cri.go:89] found id: "67eef4727303c576bec7a2a74593b3b7f69b8f03f8409449791388af32fcfd49"
	I1126 20:51:43.146171  222763 cri.go:89] found id: "05600c45da34a337d755436cad09d9486b2e6abad961eca949578950d2380066"
	I1126 20:51:43.146176  222763 cri.go:89] found id: "3cd6972a6b24c555ea5bbdbb3c406b047bbe66e5a18a1e7aa5fa534b38e02cb9"
	I1126 20:51:43.146180  222763 cri.go:89] found id: "68acb68b93b72cb9c251bab9f93e45d90bb80f9e5df2a4d9840dfa88465b5ad8"
	I1126 20:51:43.146185  222763 cri.go:89] found id: ""
	I1126 20:51:43.146251  222763 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:51:43.168945  222763 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:51:43Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:51:43.169080  222763 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:51:43.181582  222763 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:51:43.181658  222763 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:51:43.181735  222763 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:51:43.194505  222763 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:51:43.195135  222763 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-616586" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:51:43.195421  222763 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-2326/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-616586" cluster setting kubeconfig missing "embed-certs-616586" context setting]
	I1126 20:51:43.195936  222763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:43.197348  222763 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:51:43.209050  222763 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1126 20:51:43.209082  222763 kubeadm.go:602] duration metric: took 27.404369ms to restartPrimaryControlPlane
	I1126 20:51:43.209112  222763 kubeadm.go:403] duration metric: took 112.501603ms to StartCluster
	I1126 20:51:43.209147  222763 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:43.209223  222763 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:51:43.210537  222763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:43.210793  222763 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:51:43.211123  222763 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:51:43.211265  222763 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:51:43.211331  222763 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-616586"
	I1126 20:51:43.211358  222763 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-616586"
	W1126 20:51:43.211368  222763 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:51:43.211374  222763 addons.go:70] Setting dashboard=true in profile "embed-certs-616586"
	I1126 20:51:43.211393  222763 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:51:43.211401  222763 addons.go:239] Setting addon dashboard=true in "embed-certs-616586"
	W1126 20:51:43.211410  222763 addons.go:248] addon dashboard should already be in state true
	I1126 20:51:43.211437  222763 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:51:43.211863  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:43.211871  222763 addons.go:70] Setting default-storageclass=true in profile "embed-certs-616586"
	I1126 20:51:43.211884  222763 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-616586"
	I1126 20:51:43.212170  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:43.211863  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:43.215339  222763 out.go:179] * Verifying Kubernetes components...
	I1126 20:51:43.228201  222763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:51:43.257313  222763 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:51:43.262081  222763 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:51:43.265456  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:51:43.265482  222763 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:51:43.265570  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:43.274882  222763 addons.go:239] Setting addon default-storageclass=true in "embed-certs-616586"
	W1126 20:51:43.274904  222763 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:51:43.274930  222763 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:51:43.275353  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:43.277694  222763 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:51:43.289769  222763 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:51:43.289796  222763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:51:43.289864  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:43.313417  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:43.321825  222763 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:51:43.321845  222763 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:51:43.321903  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:43.333582  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:43.362964  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:43.543394  222763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:51:43.562400  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:51:43.562426  222763 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:51:43.596854  222763 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:51:43.640132  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:51:43.640162  222763 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:51:43.694753  222763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:51:43.706462  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:51:43.706485  222763 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:51:43.767031  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:51:43.767055  222763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:51:43.814482  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:51:43.814513  222763 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:51:43.885590  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:51:43.885610  222763 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:51:43.948829  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:51:43.948851  222763 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:51:43.975351  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:51:43.975377  222763 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:51:44.005955  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:51:44.005981  222763 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:51:44.023068  222763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1126 20:51:41.660338  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:43.660464  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	I1126 20:51:49.117821  222763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.574392719s)
	I1126 20:51:49.117915  222763 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.521036088s)
	I1126 20:51:49.117992  222763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.423210213s)
	I1126 20:51:49.118005  222763 node_ready.go:35] waiting up to 6m0s for node "embed-certs-616586" to be "Ready" ...
	I1126 20:51:49.199183  222763 node_ready.go:49] node "embed-certs-616586" is "Ready"
	I1126 20:51:49.199257  222763 node_ready.go:38] duration metric: took 81.198547ms for node "embed-certs-616586" to be "Ready" ...
	I1126 20:51:49.199284  222763 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:51:49.199367  222763 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:51:49.403591  222763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.380470789s)
	I1126 20:51:49.403764  222763 api_server.go:72] duration metric: took 6.192938469s to wait for apiserver process to appear ...
	I1126 20:51:49.403779  222763 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:51:49.403797  222763 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:51:49.406604  222763 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-616586 addons enable metrics-server
	
	I1126 20:51:49.409803  222763 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1126 20:51:49.412756  222763 addons.go:530] duration metric: took 6.201487608s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1126 20:51:49.416104  222763 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:51:49.416128  222763 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:51:49.904321  222763 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:51:49.913137  222763 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1126 20:51:49.914478  222763 api_server.go:141] control plane version: v1.34.1
	I1126 20:51:49.914533  222763 api_server.go:131] duration metric: took 510.74556ms to wait for apiserver health ...
	I1126 20:51:49.914556  222763 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:51:49.919418  222763 system_pods.go:59] 8 kube-system pods found
	I1126 20:51:49.919490  222763 system_pods.go:61] "coredns-66bc5c9577-lmmqs" [8b9cb74e-e5f6-413d-918a-66872e539adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:51:49.919519  222763 system_pods.go:61] "etcd-embed-certs-616586" [2379b064-da28-43a0-b71d-4a9803da3169] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:51:49.919565  222763 system_pods.go:61] "kindnet-5zbx9" [d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556] Running
	I1126 20:51:49.919591  222763 system_pods.go:61] "kube-apiserver-embed-certs-616586" [6e697b4a-2458-4ef6-8c72-8c8272b80d6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:51:49.919615  222763 system_pods.go:61] "kube-controller-manager-embed-certs-616586" [a0385efe-91d4-40ed-b76c-be281d7ed831] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:51:49.919635  222763 system_pods.go:61] "kube-proxy-g5vk4" [711e6b5c-eac4-4b0c-9a50-22ddb3b73c53] Running
	I1126 20:51:49.919668  222763 system_pods.go:61] "kube-scheduler-embed-certs-616586" [08620aaf-720f-4514-b73f-6eb433363368] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:51:49.919689  222763 system_pods.go:61] "storage-provisioner" [ceee294c-4db0-4dc0-888c-e3733a2592cb] Running
	I1126 20:51:49.919708  222763 system_pods.go:74] duration metric: took 5.132304ms to wait for pod list to return data ...
	I1126 20:51:49.919727  222763 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:51:49.922595  222763 default_sa.go:45] found service account: "default"
	I1126 20:51:49.922646  222763 default_sa.go:55] duration metric: took 2.900214ms for default service account to be created ...
	I1126 20:51:49.922671  222763 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:51:49.926448  222763 system_pods.go:86] 8 kube-system pods found
	I1126 20:51:49.926519  222763 system_pods.go:89] "coredns-66bc5c9577-lmmqs" [8b9cb74e-e5f6-413d-918a-66872e539adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:51:49.926544  222763 system_pods.go:89] "etcd-embed-certs-616586" [2379b064-da28-43a0-b71d-4a9803da3169] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:51:49.926584  222763 system_pods.go:89] "kindnet-5zbx9" [d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556] Running
	I1126 20:51:49.926612  222763 system_pods.go:89] "kube-apiserver-embed-certs-616586" [6e697b4a-2458-4ef6-8c72-8c8272b80d6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:51:49.926634  222763 system_pods.go:89] "kube-controller-manager-embed-certs-616586" [a0385efe-91d4-40ed-b76c-be281d7ed831] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:51:49.926656  222763 system_pods.go:89] "kube-proxy-g5vk4" [711e6b5c-eac4-4b0c-9a50-22ddb3b73c53] Running
	I1126 20:51:49.926692  222763 system_pods.go:89] "kube-scheduler-embed-certs-616586" [08620aaf-720f-4514-b73f-6eb433363368] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:51:49.926715  222763 system_pods.go:89] "storage-provisioner" [ceee294c-4db0-4dc0-888c-e3733a2592cb] Running
	I1126 20:51:49.926738  222763 system_pods.go:126] duration metric: took 4.048865ms to wait for k8s-apps to be running ...
	I1126 20:51:49.926760  222763 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:51:49.926843  222763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:51:49.941320  222763 system_svc.go:56] duration metric: took 14.551367ms WaitForService to wait for kubelet
	I1126 20:51:49.941348  222763 kubeadm.go:587] duration metric: took 6.730521541s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:51:49.941366  222763 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:51:49.947058  222763 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:51:49.947090  222763 node_conditions.go:123] node cpu capacity is 2
	I1126 20:51:49.947105  222763 node_conditions.go:105] duration metric: took 5.732508ms to run NodePressure ...
	I1126 20:51:49.947118  222763 start.go:242] waiting for startup goroutines ...
	I1126 20:51:49.947136  222763 start.go:247] waiting for cluster config update ...
	I1126 20:51:49.947154  222763 start.go:256] writing updated cluster config ...
	I1126 20:51:49.947451  222763 ssh_runner.go:195] Run: rm -f paused
	I1126 20:51:49.951680  222763 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:51:49.963754  222763 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lmmqs" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:51:46.160465  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:48.660829  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:51.970314  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:51:53.978775  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:51:50.662149  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:53.160039  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:55.160578  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:56.470317  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:51:58.979402  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:51:57.660176  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:59.660383  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:01.472090  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:03.969449  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:01.662018  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:04.160420  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:05.969891  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:08.469380  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:06.160629  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:08.660300  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:10.470624  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:12.971838  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:10.660909  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:13.160106  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	I1126 20:52:14.672338  219464 node_ready.go:49] node "default-k8s-diff-port-538119" is "Ready"
	I1126 20:52:14.672364  219464 node_ready.go:38] duration metric: took 39.515122789s for node "default-k8s-diff-port-538119" to be "Ready" ...
	I1126 20:52:14.672377  219464 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:52:14.672436  219464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:52:14.695280  219464 api_server.go:72] duration metric: took 40.695115535s to wait for apiserver process to appear ...
	I1126 20:52:14.695303  219464 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:52:14.695322  219464 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1126 20:52:14.703791  219464 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1126 20:52:14.705016  219464 api_server.go:141] control plane version: v1.34.1
	I1126 20:52:14.705048  219464 api_server.go:131] duration metric: took 9.738522ms to wait for apiserver health ...
	I1126 20:52:14.705057  219464 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:52:14.718581  219464 system_pods.go:59] 8 kube-system pods found
	I1126 20:52:14.718668  219464 system_pods.go:61] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:52:14.718691  219464 system_pods.go:61] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running
	I1126 20:52:14.718729  219464 system_pods.go:61] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running
	I1126 20:52:14.718749  219464 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running
	I1126 20:52:14.718768  219464 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running
	I1126 20:52:14.718791  219464 system_pods.go:61] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running
	I1126 20:52:14.718810  219464 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running
	I1126 20:52:14.718838  219464 system_pods.go:61] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:52:14.718864  219464 system_pods.go:74] duration metric: took 13.800402ms to wait for pod list to return data ...
	I1126 20:52:14.718886  219464 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:52:14.728500  219464 default_sa.go:45] found service account: "default"
	I1126 20:52:14.728522  219464 default_sa.go:55] duration metric: took 9.61581ms for default service account to be created ...
	I1126 20:52:14.728532  219464 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:52:14.731523  219464 system_pods.go:86] 8 kube-system pods found
	I1126 20:52:14.731559  219464 system_pods.go:89] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:52:14.731572  219464 system_pods.go:89] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running
	I1126 20:52:14.731581  219464 system_pods.go:89] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running
	I1126 20:52:14.731587  219464 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running
	I1126 20:52:14.731592  219464 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running
	I1126 20:52:14.731600  219464 system_pods.go:89] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running
	I1126 20:52:14.731605  219464 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running
	I1126 20:52:14.731615  219464 system_pods.go:89] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:52:14.731637  219464 retry.go:31] will retry after 273.011473ms: missing components: kube-dns
	I1126 20:52:15.009512  219464 system_pods.go:86] 8 kube-system pods found
	I1126 20:52:15.009608  219464 system_pods.go:89] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:52:15.009637  219464 system_pods.go:89] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running
	I1126 20:52:15.009678  219464 system_pods.go:89] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running
	I1126 20:52:15.009707  219464 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running
	I1126 20:52:15.009729  219464 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running
	I1126 20:52:15.009750  219464 system_pods.go:89] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running
	I1126 20:52:15.009784  219464 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running
	I1126 20:52:15.009815  219464 system_pods.go:89] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:52:15.009848  219464 retry.go:31] will retry after 359.24819ms: missing components: kube-dns
	I1126 20:52:15.375376  219464 system_pods.go:86] 8 kube-system pods found
	I1126 20:52:15.375411  219464 system_pods.go:89] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:52:15.375422  219464 system_pods.go:89] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running
	I1126 20:52:15.375428  219464 system_pods.go:89] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running
	I1126 20:52:15.375432  219464 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running
	I1126 20:52:15.375437  219464 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running
	I1126 20:52:15.375441  219464 system_pods.go:89] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running
	I1126 20:52:15.375445  219464 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running
	I1126 20:52:15.375451  219464 system_pods.go:89] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:52:15.375471  219464 retry.go:31] will retry after 321.000099ms: missing components: kube-dns
	I1126 20:52:15.700475  219464 system_pods.go:86] 8 kube-system pods found
	I1126 20:52:15.700515  219464 system_pods.go:89] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Running
	I1126 20:52:15.700522  219464 system_pods.go:89] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running
	I1126 20:52:15.700528  219464 system_pods.go:89] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running
	I1126 20:52:15.700533  219464 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running
	I1126 20:52:15.700537  219464 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running
	I1126 20:52:15.700541  219464 system_pods.go:89] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running
	I1126 20:52:15.700549  219464 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running
	I1126 20:52:15.700557  219464 system_pods.go:89] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Running
	I1126 20:52:15.700564  219464 system_pods.go:126] duration metric: took 972.026892ms to wait for k8s-apps to be running ...
	I1126 20:52:15.700571  219464 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:52:15.700627  219464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:52:15.713628  219464 system_svc.go:56] duration metric: took 13.046881ms WaitForService to wait for kubelet
	I1126 20:52:15.713660  219464 kubeadm.go:587] duration metric: took 41.713500378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:52:15.713681  219464 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:52:15.716632  219464 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:52:15.716666  219464 node_conditions.go:123] node cpu capacity is 2
	I1126 20:52:15.716680  219464 node_conditions.go:105] duration metric: took 2.993332ms to run NodePressure ...
	I1126 20:52:15.716693  219464 start.go:242] waiting for startup goroutines ...
	I1126 20:52:15.716709  219464 start.go:247] waiting for cluster config update ...
	I1126 20:52:15.716721  219464 start.go:256] writing updated cluster config ...
	I1126 20:52:15.717049  219464 ssh_runner.go:195] Run: rm -f paused
	I1126 20:52:15.720621  219464 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:52:15.725302  219464 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-whx45" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.730259  219464 pod_ready.go:94] pod "coredns-66bc5c9577-whx45" is "Ready"
	I1126 20:52:15.730285  219464 pod_ready.go:86] duration metric: took 4.955687ms for pod "coredns-66bc5c9577-whx45" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.732420  219464 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.737319  219464 pod_ready.go:94] pod "etcd-default-k8s-diff-port-538119" is "Ready"
	I1126 20:52:15.737346  219464 pod_ready.go:86] duration metric: took 4.900082ms for pod "etcd-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.740048  219464 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.745223  219464 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-538119" is "Ready"
	I1126 20:52:15.745248  219464 pod_ready.go:86] duration metric: took 5.174578ms for pod "kube-apiserver-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.747682  219464 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:16.125507  219464 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-538119" is "Ready"
	I1126 20:52:16.125540  219464 pod_ready.go:86] duration metric: took 377.827174ms for pod "kube-controller-manager-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:16.325821  219464 pod_ready.go:83] waiting for pod "kube-proxy-sp5l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:16.724803  219464 pod_ready.go:94] pod "kube-proxy-sp5l4" is "Ready"
	I1126 20:52:16.724839  219464 pod_ready.go:86] duration metric: took 398.994782ms for pod "kube-proxy-sp5l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:16.925552  219464 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:17.325707  219464 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-538119" is "Ready"
	I1126 20:52:17.325734  219464 pod_ready.go:86] duration metric: took 400.155489ms for pod "kube-scheduler-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:17.325748  219464 pod_ready.go:40] duration metric: took 1.605092601s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:52:17.387806  219464 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:52:17.390990  219464 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-538119" cluster and "default" namespace by default
	W1126 20:52:15.469635  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:17.478306  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:19.969433  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:21.974474  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	I1126 20:52:23.469596  222763 pod_ready.go:94] pod "coredns-66bc5c9577-lmmqs" is "Ready"
	I1126 20:52:23.469625  222763 pod_ready.go:86] duration metric: took 33.505842541s for pod "coredns-66bc5c9577-lmmqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.472554  222763 pod_ready.go:83] waiting for pod "etcd-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.477444  222763 pod_ready.go:94] pod "etcd-embed-certs-616586" is "Ready"
	I1126 20:52:23.477477  222763 pod_ready.go:86] duration metric: took 4.895759ms for pod "etcd-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.480057  222763 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.489102  222763 pod_ready.go:94] pod "kube-apiserver-embed-certs-616586" is "Ready"
	I1126 20:52:23.489125  222763 pod_ready.go:86] duration metric: took 9.045768ms for pod "kube-apiserver-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.492057  222763 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.667618  222763 pod_ready.go:94] pod "kube-controller-manager-embed-certs-616586" is "Ready"
	I1126 20:52:23.667648  222763 pod_ready.go:86] duration metric: took 175.562166ms for pod "kube-controller-manager-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.867881  222763 pod_ready.go:83] waiting for pod "kube-proxy-g5vk4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:24.267216  222763 pod_ready.go:94] pod "kube-proxy-g5vk4" is "Ready"
	I1126 20:52:24.267244  222763 pod_ready.go:86] duration metric: took 399.333009ms for pod "kube-proxy-g5vk4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:24.467299  222763 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:24.867500  222763 pod_ready.go:94] pod "kube-scheduler-embed-certs-616586" is "Ready"
	I1126 20:52:24.867525  222763 pod_ready.go:86] duration metric: took 400.196197ms for pod "kube-scheduler-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:24.867536  222763 pod_ready.go:40] duration metric: took 34.915821928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:52:24.926575  222763 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:52:24.929672  222763 out.go:179] * Done! kubectl is now configured to use "embed-certs-616586" cluster and "default" namespace by default
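	As a side note on the verbose `/healthz` output earlier in this log: minikube treats an HTTP 500 with per-check `[+]`/`[-]` lines as retryable and polls until the endpoint returns 200. A minimal sketch of parsing that line format into a per-check map (assuming lines follow the `[+]name ok` / `[-]name failed: ...` layout shown above; `parse_healthz` is a hypothetical helper, not minikube code):

```python
def parse_healthz(text: str) -> dict:
    """Map each healthz check name to True (ok) or False (failed)."""
    checks = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("[+]") and line.endswith(" ok"):
            # "[+]etcd ok" -> name is between the marker and the trailing " ok"
            checks[line[3:-3]] = True
        elif line.startswith("[-]"):
            # "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
            checks[line[3:].split(" failed", 1)[0]] = False
    return checks

sample = """[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
healthz check failed"""

result = parse_healthz(sample)
# any False value corresponds to the HTTP 500 seen in the log above
print(all(result.values()))
```

	The same verbose output can be fetched by hand with `kubectl get --raw='/healthz?verbose'` against a reachable cluster.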
	
	
	==> CRI-O <==
	Nov 26 20:52:15 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:15.036687651Z" level=info msg="Created container 13c4d4c966fe1ee02564346136358af1ca512cb6e58f4c322ff610b1bd2a1c70: kube-system/coredns-66bc5c9577-whx45/coredns" id=777bf3f2-d9fc-4e82-b334-c8b87e1af70e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:52:15 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:15.037798684Z" level=info msg="Starting container: 13c4d4c966fe1ee02564346136358af1ca512cb6e58f4c322ff610b1bd2a1c70" id=b25eced2-a709-4e57-9f32-6b9711bdfec7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:52:15 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:15.039912952Z" level=info msg="Started container" PID=1736 containerID=13c4d4c966fe1ee02564346136358af1ca512cb6e58f4c322ff610b1bd2a1c70 description=kube-system/coredns-66bc5c9577-whx45/coredns id=b25eced2-a709-4e57-9f32-6b9711bdfec7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=53299df4e290c1c2c6ae0c541ddaf5a5c398dfff5145d73d2fbcc71c7c852816
	Nov 26 20:52:17 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:17.973127147Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d8190894-0a33-45a4-a2fb-1c8e65425d53 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:52:17 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:17.97320534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:52:17 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:17.986137435Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c1050127a6a17ea3ca97a8e962e5ddffcead3e4d65b97e94d9871808e3360a50 UID:e45ab641-595b-4250-9bca-f10dee6cbe16 NetNS:/var/run/netns/a236a510-ff98-4b5c-8033-277d1d59f8ab Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004be648}] Aliases:map[]}"
	Nov 26 20:52:17 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:17.986177155Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 26 20:52:17 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:17.994513692Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c1050127a6a17ea3ca97a8e962e5ddffcead3e4d65b97e94d9871808e3360a50 UID:e45ab641-595b-4250-9bca-f10dee6cbe16 NetNS:/var/run/netns/a236a510-ff98-4b5c-8033-277d1d59f8ab Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004be648}] Aliases:map[]}"
	Nov 26 20:52:17 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:17.994660452Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 26 20:52:17 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:17.997853065Z" level=info msg="Ran pod sandbox c1050127a6a17ea3ca97a8e962e5ddffcead3e4d65b97e94d9871808e3360a50 with infra container: default/busybox/POD" id=d8190894-0a33-45a4-a2fb-1c8e65425d53 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:52:17 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:17.999014131Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e54ede18-5f26-4ff7-8b3c-35f543286aea name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:52:17 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:17.999129532Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e54ede18-5f26-4ff7-8b3c-35f543286aea name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:52:17 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:17.999183553Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e54ede18-5f26-4ff7-8b3c-35f543286aea name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:52:18 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:18.001571545Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=18334495-d9eb-44df-851f-3996d6f84f8c name=/runtime.v1.ImageService/PullImage
	Nov 26 20:52:18 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:18.004461119Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 26 20:52:20 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:20.092583709Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=18334495-d9eb-44df-851f-3996d6f84f8c name=/runtime.v1.ImageService/PullImage
	Nov 26 20:52:20 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:20.093623678Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b4010ac7-d007-41ed-a72e-1a55dd12dd27 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:52:20 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:20.095485679Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=614d7ce0-0bb6-42f6-a5f2-ceb47ee20898 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:52:20 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:20.102988846Z" level=info msg="Creating container: default/busybox/busybox" id=d73de585-f025-4084-b19d-7db21a2517cc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:52:20 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:20.103099997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:52:20 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:20.108162939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:52:20 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:20.10886225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:52:20 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:20.124897525Z" level=info msg="Created container 117fe3bc6ef06096c2198d9e244149b0ad8a8d67378b89c8eaa694c7775e7ded: default/busybox/busybox" id=d73de585-f025-4084-b19d-7db21a2517cc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:52:20 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:20.126112456Z" level=info msg="Starting container: 117fe3bc6ef06096c2198d9e244149b0ad8a8d67378b89c8eaa694c7775e7ded" id=bb865140-5386-4768-aca8-29ca37a5c479 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:52:20 default-k8s-diff-port-538119 crio[833]: time="2025-11-26T20:52:20.128121645Z" level=info msg="Started container" PID=1795 containerID=117fe3bc6ef06096c2198d9e244149b0ad8a8d67378b89c8eaa694c7775e7ded description=default/busybox/busybox id=bb865140-5386-4768-aca8-29ca37a5c479 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c1050127a6a17ea3ca97a8e962e5ddffcead3e4d65b97e94d9871808e3360a50
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	117fe3bc6ef06       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   c1050127a6a17       busybox                                                default
	13c4d4c966fe1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   53299df4e290c       coredns-66bc5c9577-whx45                               kube-system
	cd987bb17072a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   89cfb4a72bbe7       storage-provisioner                                    kube-system
	576cb49ff94e8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   cc37299b5504f       kindnet-ts8sn                                          kube-system
	1afaf355721cc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   057d2ee1808db       kube-proxy-sp5l4                                       kube-system
	e6126ef30c001       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   aa3f7dd9c7254       kube-scheduler-default-k8s-diff-port-538119            kube-system
	4f8e54cda27ec       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   973feb7416a60       kube-apiserver-default-k8s-diff-port-538119            kube-system
	24770b69ba94c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   c2f4513ac5add       kube-controller-manager-default-k8s-diff-port-538119   kube-system
	a36e7c2130812       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   6eb0f73fb7eb0       etcd-default-k8s-diff-port-538119                      kube-system
	
	
	==> coredns [13c4d4c966fe1ee02564346136358af1ca512cb6e58f4c322ff610b1bd2a1c70] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53334 - 46516 "HINFO IN 9199392387975149386.6197314121831971203. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016263051s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-538119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-538119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=default-k8s-diff-port-538119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_51_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:51:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-538119
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:52:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:52:14 +0000   Wed, 26 Nov 2025 20:51:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:52:14 +0000   Wed, 26 Nov 2025 20:51:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:52:14 +0000   Wed, 26 Nov 2025 20:51:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:52:14 +0000   Wed, 26 Nov 2025 20:52:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-538119
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                6b16bd81-d69e-4bbf-af91-d5d3d851d05d
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-whx45                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-538119                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-ts8sn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-538119             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-538119    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-sp5l4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-538119             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-538119 event: Registered Node default-k8s-diff-port-538119 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-538119 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	[Nov26 20:49] overlayfs: idmapped layers are currently not supported
	[Nov26 20:50] overlayfs: idmapped layers are currently not supported
	[Nov26 20:51] overlayfs: idmapped layers are currently not supported
	[ +24.066506] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a36e7c2130812172caeccb44bc6c932c9a8d3f612d37ef9ef89add60fea71276] <==
	{"level":"warn","ts":"2025-11-26T20:51:23.394220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.434085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.462880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.503626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.516387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.554088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.582402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.611428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.640355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.742912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.760858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.815529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.848553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.912797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.937068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:23.993081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:24.060212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:24.109233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:24.155234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:24.182533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:24.201797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:24.259020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:24.276820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:24.306824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:24.396101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55112","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:52:27 up  1:34,  0 user,  load average: 3.11, 3.19, 2.59
	Linux default-k8s-diff-port-538119 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [576cb49ff94e83eea1002c4306436c336d6cc5c0b0771d2634c49d7c96eeb588] <==
	I1126 20:51:34.026405       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:51:34.026641       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:51:34.026751       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:51:34.026763       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:51:34.026773       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:51:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:51:34.151403       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:51:34.151428       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:51:34.151437       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:51:34.151554       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:52:04.152503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 20:52:04.227072       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:52:04.227072       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:52:04.228331       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1126 20:52:05.651973       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:52:05.652000       1 metrics.go:72] Registering metrics
	I1126 20:52:05.652067       1 controller.go:711] "Syncing nftables rules"
	I1126 20:52:14.151794       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:52:14.151851       1 main.go:301] handling current node
	I1126 20:52:24.151349       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:52:24.151383       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4f8e54cda27ec1898b1fe60c45df718a08585b5e917d4b1728a83295a419bc9d] <==
	E1126 20:51:25.511982       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1126 20:51:25.564363       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:51:25.576261       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:51:25.576368       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1126 20:51:25.597910       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:51:25.604110       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:51:25.723246       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1126 20:51:26.246045       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:51:26.253557       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:51:26.253643       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:51:26.985269       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:51:27.045274       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:51:27.177461       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:51:27.184936       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1126 20:51:27.186120       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:51:27.191336       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:51:27.404460       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:51:28.279145       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:51:28.293427       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:51:28.305625       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:51:32.747449       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:51:33.198143       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:51:33.203376       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:51:33.400268       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1126 20:52:25.779131       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:44456: use of closed network connection
	
	
	==> kube-controller-manager [24770b69ba94c0fc6d31671364c0fe7f51df04c341c8be5e1cfc4faa40e86017] <==
	I1126 20:51:32.440032       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 20:51:32.440137       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:51:32.440287       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 20:51:32.440117       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1126 20:51:32.441145       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:51:32.441233       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:51:32.441287       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:51:32.441488       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:51:32.441744       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:51:32.441779       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:51:32.441838       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:51:32.441914       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-538119"
	I1126 20:51:32.441987       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1126 20:51:32.442034       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:51:32.442656       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 20:51:32.445481       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1126 20:51:32.450166       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1126 20:51:32.451539       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1126 20:51:32.451647       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1126 20:51:32.452743       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1126 20:51:32.453863       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1126 20:51:32.454495       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 20:51:32.464267       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:51:32.464336       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:52:17.449080       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1afaf355721cc5e57631c19becf4cf80b9d071eb9f54d411aeb9759329591d79] <==
	I1126 20:51:33.869645       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:51:33.951563       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:51:34.053517       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:51:34.053554       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1126 20:51:34.053624       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:51:34.177584       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:51:34.177638       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:51:34.218206       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:51:34.218502       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:51:34.218514       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:51:34.219441       1 config.go:200] "Starting service config controller"
	I1126 20:51:34.219453       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:51:34.219635       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:51:34.219642       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:51:34.219654       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:51:34.219658       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:51:34.220275       1 config.go:309] "Starting node config controller"
	I1126 20:51:34.220284       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:51:34.220297       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:51:34.323486       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:51:34.335786       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:51:34.347204       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e6126ef30c001078b13ee0d9e8b73673d8db2a7257400fe42e19d43d10ba8bd8] <==
	E1126 20:51:25.545443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:51:25.545490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:51:25.545535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:51:25.545576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 20:51:25.545637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:51:25.545702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 20:51:25.545747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 20:51:25.545790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 20:51:25.545838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 20:51:25.546067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 20:51:25.546122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:51:25.546167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 20:51:25.546266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1126 20:51:25.546291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 20:51:26.377796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:51:26.401815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 20:51:26.428079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:51:26.502431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 20:51:26.502514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1126 20:51:26.550670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 20:51:26.582717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:51:26.610604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 20:51:26.628617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:51:26.668222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1126 20:51:27.136715       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:51:32 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:32.450601    1297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 26 20:51:33 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:33.516312    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe1ccf23-f465-4b93-b09e-c5a07258326f-kube-proxy\") pod \"kube-proxy-sp5l4\" (UID: \"fe1ccf23-f465-4b93-b09e-c5a07258326f\") " pod="kube-system/kube-proxy-sp5l4"
	Nov 26 20:51:33 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:33.516447    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe1ccf23-f465-4b93-b09e-c5a07258326f-lib-modules\") pod \"kube-proxy-sp5l4\" (UID: \"fe1ccf23-f465-4b93-b09e-c5a07258326f\") " pod="kube-system/kube-proxy-sp5l4"
	Nov 26 20:51:33 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:33.516474    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4sq\" (UniqueName: \"kubernetes.io/projected/fe1ccf23-f465-4b93-b09e-c5a07258326f-kube-api-access-6x4sq\") pod \"kube-proxy-sp5l4\" (UID: \"fe1ccf23-f465-4b93-b09e-c5a07258326f\") " pod="kube-system/kube-proxy-sp5l4"
	Nov 26 20:51:33 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:33.516537    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe1ccf23-f465-4b93-b09e-c5a07258326f-xtables-lock\") pod \"kube-proxy-sp5l4\" (UID: \"fe1ccf23-f465-4b93-b09e-c5a07258326f\") " pod="kube-system/kube-proxy-sp5l4"
	Nov 26 20:51:33 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:33.516557    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/689c63b4-0698-4849-b955-38da30ca9d27-cni-cfg\") pod \"kindnet-ts8sn\" (UID: \"689c63b4-0698-4849-b955-38da30ca9d27\") " pod="kube-system/kindnet-ts8sn"
	Nov 26 20:51:33 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:33.516620    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/689c63b4-0698-4849-b955-38da30ca9d27-xtables-lock\") pod \"kindnet-ts8sn\" (UID: \"689c63b4-0698-4849-b955-38da30ca9d27\") " pod="kube-system/kindnet-ts8sn"
	Nov 26 20:51:33 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:33.516679    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gd7k\" (UniqueName: \"kubernetes.io/projected/689c63b4-0698-4849-b955-38da30ca9d27-kube-api-access-7gd7k\") pod \"kindnet-ts8sn\" (UID: \"689c63b4-0698-4849-b955-38da30ca9d27\") " pod="kube-system/kindnet-ts8sn"
	Nov 26 20:51:33 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:33.516702    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/689c63b4-0698-4849-b955-38da30ca9d27-lib-modules\") pod \"kindnet-ts8sn\" (UID: \"689c63b4-0698-4849-b955-38da30ca9d27\") " pod="kube-system/kindnet-ts8sn"
	Nov 26 20:51:33 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:33.644267    1297 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 26 20:51:33 default-k8s-diff-port-538119 kubelet[1297]: W1126 20:51:33.775825    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/crio-cc37299b5504fb8312acd5e44ec4c9d3525facd12be2090f696ac57c926bcb89 WatchSource:0}: Error finding container cc37299b5504fb8312acd5e44ec4c9d3525facd12be2090f696ac57c926bcb89: Status 404 returned error can't find the container with id cc37299b5504fb8312acd5e44ec4c9d3525facd12be2090f696ac57c926bcb89
	Nov 26 20:51:34 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:34.380927    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ts8sn" podStartSLOduration=1.380908126 podStartE2EDuration="1.380908126s" podCreationTimestamp="2025-11-26 20:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:51:34.320503734 +0000 UTC m=+6.269612642" watchObservedRunningTime="2025-11-26 20:51:34.380908126 +0000 UTC m=+6.330017026"
	Nov 26 20:51:35 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:51:35.588771    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sp5l4" podStartSLOduration=2.588751941 podStartE2EDuration="2.588751941s" podCreationTimestamp="2025-11-26 20:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:51:34.416167563 +0000 UTC m=+6.365276471" watchObservedRunningTime="2025-11-26 20:51:35.588751941 +0000 UTC m=+7.537860833"
	Nov 26 20:52:14 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:52:14.594661    1297 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 26 20:52:14 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:52:14.765911    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx2bj\" (UniqueName: \"kubernetes.io/projected/4c930cb6-3a88-453d-87b2-982b117252c1-kube-api-access-vx2bj\") pod \"coredns-66bc5c9577-whx45\" (UID: \"4c930cb6-3a88-453d-87b2-982b117252c1\") " pod="kube-system/coredns-66bc5c9577-whx45"
	Nov 26 20:52:14 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:52:14.765988    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c2af4292-99c1-4828-a90f-f165d964345f-tmp\") pod \"storage-provisioner\" (UID: \"c2af4292-99c1-4828-a90f-f165d964345f\") " pod="kube-system/storage-provisioner"
	Nov 26 20:52:14 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:52:14.766010    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cljkh\" (UniqueName: \"kubernetes.io/projected/c2af4292-99c1-4828-a90f-f165d964345f-kube-api-access-cljkh\") pod \"storage-provisioner\" (UID: \"c2af4292-99c1-4828-a90f-f165d964345f\") " pod="kube-system/storage-provisioner"
	Nov 26 20:52:14 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:52:14.766031    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c930cb6-3a88-453d-87b2-982b117252c1-config-volume\") pod \"coredns-66bc5c9577-whx45\" (UID: \"4c930cb6-3a88-453d-87b2-982b117252c1\") " pod="kube-system/coredns-66bc5c9577-whx45"
	Nov 26 20:52:14 default-k8s-diff-port-538119 kubelet[1297]: W1126 20:52:14.940349    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/crio-89cfb4a72bbe7bf08393b29ef60e5b545c209a27268526d4fd4b595d22d0fbb8 WatchSource:0}: Error finding container 89cfb4a72bbe7bf08393b29ef60e5b545c209a27268526d4fd4b595d22d0fbb8: Status 404 returned error can't find the container with id 89cfb4a72bbe7bf08393b29ef60e5b545c209a27268526d4fd4b595d22d0fbb8
	Nov 26 20:52:14 default-k8s-diff-port-538119 kubelet[1297]: W1126 20:52:14.991481    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/crio-53299df4e290c1c2c6ae0c541ddaf5a5c398dfff5145d73d2fbcc71c7c852816 WatchSource:0}: Error finding container 53299df4e290c1c2c6ae0c541ddaf5a5c398dfff5145d73d2fbcc71c7c852816: Status 404 returned error can't find the container with id 53299df4e290c1c2c6ae0c541ddaf5a5c398dfff5145d73d2fbcc71c7c852816
	Nov 26 20:52:15 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:52:15.399477    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-whx45" podStartSLOduration=42.399458361 podStartE2EDuration="42.399458361s" podCreationTimestamp="2025-11-26 20:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:52:15.379613434 +0000 UTC m=+47.328722334" watchObservedRunningTime="2025-11-26 20:52:15.399458361 +0000 UTC m=+47.348567269"
	Nov 26 20:52:17 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:52:17.661085    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.661064599 podStartE2EDuration="42.661064599s" podCreationTimestamp="2025-11-26 20:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:52:15.422671165 +0000 UTC m=+47.371780082" watchObservedRunningTime="2025-11-26 20:52:17.661064599 +0000 UTC m=+49.610173490"
	Nov 26 20:52:17 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:52:17.790566    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpb4n\" (UniqueName: \"kubernetes.io/projected/e45ab641-595b-4250-9bca-f10dee6cbe16-kube-api-access-mpb4n\") pod \"busybox\" (UID: \"e45ab641-595b-4250-9bca-f10dee6cbe16\") " pod="default/busybox"
	Nov 26 20:52:17 default-k8s-diff-port-538119 kubelet[1297]: W1126 20:52:17.996282    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/crio-c1050127a6a17ea3ca97a8e962e5ddffcead3e4d65b97e94d9871808e3360a50 WatchSource:0}: Error finding container c1050127a6a17ea3ca97a8e962e5ddffcead3e4d65b97e94d9871808e3360a50: Status 404 returned error can't find the container with id c1050127a6a17ea3ca97a8e962e5ddffcead3e4d65b97e94d9871808e3360a50
	Nov 26 20:52:20 default-k8s-diff-port-538119 kubelet[1297]: I1126 20:52:20.398022    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.302705288 podStartE2EDuration="3.398004999s" podCreationTimestamp="2025-11-26 20:52:17 +0000 UTC" firstStartedPulling="2025-11-26 20:52:17.999493066 +0000 UTC m=+49.948601966" lastFinishedPulling="2025-11-26 20:52:20.094792777 +0000 UTC m=+52.043901677" observedRunningTime="2025-11-26 20:52:20.397755981 +0000 UTC m=+52.346864881" watchObservedRunningTime="2025-11-26 20:52:20.398004999 +0000 UTC m=+52.347113899"
	
	
	==> storage-provisioner [cd987bb17072ac9603f1f3e65dfc40057861cf797ce3d07ed2d5bdadf5155ef5] <==
	I1126 20:52:15.029578       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:52:15.063449       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:52:15.063500       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:52:15.110199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:15.120258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:52:15.120509       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:52:15.120769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538119_b29b50e5-899f-45b2-bf39-caef02afdaae!
	I1126 20:52:15.123514       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"73b04fd0-3ce6-4808-aac2-0c1574a9d61f", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-538119_b29b50e5-899f-45b2-bf39-caef02afdaae became leader
	W1126 20:52:15.123726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:15.133527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:52:15.222576       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538119_b29b50e5-899f-45b2-bf39-caef02afdaae!
	W1126 20:52:17.137388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:17.141971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:19.144791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:19.152213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:21.155584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:21.162509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:23.165608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:23.170012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:25.173835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:25.183770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:27.189187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:27.199134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-538119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-616586 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-616586 --alsologtostderr -v=1: exit status 80 (2.483929632s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-616586 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1126 20:52:36.723989  225766 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:52:36.724186  225766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:52:36.724212  225766 out.go:374] Setting ErrFile to fd 2...
	I1126 20:52:36.724232  225766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:52:36.724614  225766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:52:36.725350  225766 out.go:368] Setting JSON to false
	I1126 20:52:36.725382  225766 mustload.go:66] Loading cluster: embed-certs-616586
	I1126 20:52:36.725864  225766 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:52:36.726372  225766 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:52:36.743981  225766 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:52:36.744320  225766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:52:36.798361  225766 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-26 20:52:36.788974058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:52:36.799042  225766 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-616586 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:52:36.804563  225766 out.go:179] * Pausing node embed-certs-616586 ... 
	I1126 20:52:36.807478  225766 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:52:36.807909  225766 ssh_runner.go:195] Run: systemctl --version
	I1126 20:52:36.808727  225766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:52:36.827020  225766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:52:36.932509  225766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:52:36.948079  225766 pause.go:52] kubelet running: true
	I1126 20:52:36.948147  225766 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:52:37.229194  225766 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:52:37.229274  225766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:52:37.302843  225766 cri.go:89] found id: "f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b"
	I1126 20:52:37.302865  225766 cri.go:89] found id: "05caffd7fc383997b08372234b64425e04fa2dbf03830dbf95855408fa9b65c0"
	I1126 20:52:37.302871  225766 cri.go:89] found id: "079110bf8f15d397c0fdba7593f783a31a000fcd6b92de2b4477a09731aab5bb"
	I1126 20:52:37.302874  225766 cri.go:89] found id: "ff0908c8190d668949024b3a2d898917d6596966a0f2c2198d6de6d5c823461b"
	I1126 20:52:37.302878  225766 cri.go:89] found id: "ebaf108a1d8ad6369fcdb2bd0e441964826b9647f9e876db927e0728e70f0a7c"
	I1126 20:52:37.302899  225766 cri.go:89] found id: "67eef4727303c576bec7a2a74593b3b7f69b8f03f8409449791388af32fcfd49"
	I1126 20:52:37.302904  225766 cri.go:89] found id: "05600c45da34a337d755436cad09d9486b2e6abad961eca949578950d2380066"
	I1126 20:52:37.302907  225766 cri.go:89] found id: "3cd6972a6b24c555ea5bbdbb3c406b047bbe66e5a18a1e7aa5fa534b38e02cb9"
	I1126 20:52:37.302911  225766 cri.go:89] found id: "68acb68b93b72cb9c251bab9f93e45d90bb80f9e5df2a4d9840dfa88465b5ad8"
	I1126 20:52:37.302918  225766 cri.go:89] found id: "b242c58cd6a92f6ae5ee1f8d498bfe274cd49d08c6f8e168776f53723a9db999"
	I1126 20:52:37.302924  225766 cri.go:89] found id: "cc0cb0d7adecab0e806790ede0bafa00cebde36ff2976b7770c516f4f5ebb8c0"
	I1126 20:52:37.302926  225766 cri.go:89] found id: ""
	I1126 20:52:37.302974  225766 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:52:37.314439  225766 retry.go:31] will retry after 247.132412ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:52:37Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:52:37.561859  225766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:52:37.574436  225766 pause.go:52] kubelet running: false
	I1126 20:52:37.574513  225766 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:52:37.734386  225766 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:52:37.734475  225766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:52:37.799042  225766 cri.go:89] found id: "f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b"
	I1126 20:52:37.799064  225766 cri.go:89] found id: "05caffd7fc383997b08372234b64425e04fa2dbf03830dbf95855408fa9b65c0"
	I1126 20:52:37.799069  225766 cri.go:89] found id: "079110bf8f15d397c0fdba7593f783a31a000fcd6b92de2b4477a09731aab5bb"
	I1126 20:52:37.799072  225766 cri.go:89] found id: "ff0908c8190d668949024b3a2d898917d6596966a0f2c2198d6de6d5c823461b"
	I1126 20:52:37.799076  225766 cri.go:89] found id: "ebaf108a1d8ad6369fcdb2bd0e441964826b9647f9e876db927e0728e70f0a7c"
	I1126 20:52:37.799080  225766 cri.go:89] found id: "67eef4727303c576bec7a2a74593b3b7f69b8f03f8409449791388af32fcfd49"
	I1126 20:52:37.799083  225766 cri.go:89] found id: "05600c45da34a337d755436cad09d9486b2e6abad961eca949578950d2380066"
	I1126 20:52:37.799086  225766 cri.go:89] found id: "3cd6972a6b24c555ea5bbdbb3c406b047bbe66e5a18a1e7aa5fa534b38e02cb9"
	I1126 20:52:37.799089  225766 cri.go:89] found id: "68acb68b93b72cb9c251bab9f93e45d90bb80f9e5df2a4d9840dfa88465b5ad8"
	I1126 20:52:37.799099  225766 cri.go:89] found id: "b242c58cd6a92f6ae5ee1f8d498bfe274cd49d08c6f8e168776f53723a9db999"
	I1126 20:52:37.799103  225766 cri.go:89] found id: "cc0cb0d7adecab0e806790ede0bafa00cebde36ff2976b7770c516f4f5ebb8c0"
	I1126 20:52:37.799106  225766 cri.go:89] found id: ""
	I1126 20:52:37.799157  225766 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:52:37.810640  225766 retry.go:31] will retry after 318.496526ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:52:37Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:52:38.130297  225766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:52:38.143042  225766 pause.go:52] kubelet running: false
	I1126 20:52:38.143114  225766 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:52:38.335550  225766 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:52:38.335706  225766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:52:38.400395  225766 cri.go:89] found id: "f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b"
	I1126 20:52:38.400420  225766 cri.go:89] found id: "05caffd7fc383997b08372234b64425e04fa2dbf03830dbf95855408fa9b65c0"
	I1126 20:52:38.400425  225766 cri.go:89] found id: "079110bf8f15d397c0fdba7593f783a31a000fcd6b92de2b4477a09731aab5bb"
	I1126 20:52:38.400429  225766 cri.go:89] found id: "ff0908c8190d668949024b3a2d898917d6596966a0f2c2198d6de6d5c823461b"
	I1126 20:52:38.400433  225766 cri.go:89] found id: "ebaf108a1d8ad6369fcdb2bd0e441964826b9647f9e876db927e0728e70f0a7c"
	I1126 20:52:38.400436  225766 cri.go:89] found id: "67eef4727303c576bec7a2a74593b3b7f69b8f03f8409449791388af32fcfd49"
	I1126 20:52:38.400440  225766 cri.go:89] found id: "05600c45da34a337d755436cad09d9486b2e6abad961eca949578950d2380066"
	I1126 20:52:38.400443  225766 cri.go:89] found id: "3cd6972a6b24c555ea5bbdbb3c406b047bbe66e5a18a1e7aa5fa534b38e02cb9"
	I1126 20:52:38.400446  225766 cri.go:89] found id: "68acb68b93b72cb9c251bab9f93e45d90bb80f9e5df2a4d9840dfa88465b5ad8"
	I1126 20:52:38.400452  225766 cri.go:89] found id: "b242c58cd6a92f6ae5ee1f8d498bfe274cd49d08c6f8e168776f53723a9db999"
	I1126 20:52:38.400455  225766 cri.go:89] found id: "cc0cb0d7adecab0e806790ede0bafa00cebde36ff2976b7770c516f4f5ebb8c0"
	I1126 20:52:38.400458  225766 cri.go:89] found id: ""
	I1126 20:52:38.400509  225766 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:52:38.412075  225766 retry.go:31] will retry after 466.301389ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:52:38Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:52:38.879373  225766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:52:38.892050  225766 pause.go:52] kubelet running: false
	I1126 20:52:38.892115  225766 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:52:39.069302  225766 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:52:39.069375  225766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:52:39.132137  225766 cri.go:89] found id: "f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b"
	I1126 20:52:39.132202  225766 cri.go:89] found id: "05caffd7fc383997b08372234b64425e04fa2dbf03830dbf95855408fa9b65c0"
	I1126 20:52:39.132214  225766 cri.go:89] found id: "079110bf8f15d397c0fdba7593f783a31a000fcd6b92de2b4477a09731aab5bb"
	I1126 20:52:39.132219  225766 cri.go:89] found id: "ff0908c8190d668949024b3a2d898917d6596966a0f2c2198d6de6d5c823461b"
	I1126 20:52:39.132223  225766 cri.go:89] found id: "ebaf108a1d8ad6369fcdb2bd0e441964826b9647f9e876db927e0728e70f0a7c"
	I1126 20:52:39.132227  225766 cri.go:89] found id: "67eef4727303c576bec7a2a74593b3b7f69b8f03f8409449791388af32fcfd49"
	I1126 20:52:39.132231  225766 cri.go:89] found id: "05600c45da34a337d755436cad09d9486b2e6abad961eca949578950d2380066"
	I1126 20:52:39.132234  225766 cri.go:89] found id: "3cd6972a6b24c555ea5bbdbb3c406b047bbe66e5a18a1e7aa5fa534b38e02cb9"
	I1126 20:52:39.132237  225766 cri.go:89] found id: "68acb68b93b72cb9c251bab9f93e45d90bb80f9e5df2a4d9840dfa88465b5ad8"
	I1126 20:52:39.132243  225766 cri.go:89] found id: "b242c58cd6a92f6ae5ee1f8d498bfe274cd49d08c6f8e168776f53723a9db999"
	I1126 20:52:39.132246  225766 cri.go:89] found id: "cc0cb0d7adecab0e806790ede0bafa00cebde36ff2976b7770c516f4f5ebb8c0"
	I1126 20:52:39.132249  225766 cri.go:89] found id: ""
	I1126 20:52:39.132316  225766 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:52:39.147038  225766 out.go:203] 
	W1126 20:52:39.149948  225766 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:52:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:52:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:52:39.149967  225766 out.go:285] * 
	* 
	W1126 20:52:39.155883  225766 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:52:39.158913  225766 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-616586 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-616586
helpers_test.go:243: (dbg) docker inspect embed-certs-616586:

-- stdout --
	[
	    {
	        "Id": "76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d",
	        "Created": "2025-11-26T20:49:51.803939719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:51:35.407395004Z",
	            "FinishedAt": "2025-11-26T20:51:34.290370504Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/hosts",
	        "LogPath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d-json.log",
	        "Name": "/embed-certs-616586",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-616586:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-616586",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d",
	                "LowerDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-616586",
	                "Source": "/var/lib/docker/volumes/embed-certs-616586/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-616586",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-616586",
	                "name.minikube.sigs.k8s.io": "embed-certs-616586",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "73660bb5ae765a5adb2c739fef6b4530ea6a2229636bcf527ebf424e7b460de2",
	            "SandboxKey": "/var/run/docker/netns/73660bb5ae76",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-616586": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:d1:d9:a1:42:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e81bfab46f3df2dcaf4383ddbd73f7ed61981d9755f2d4e0122a1a2df6affbf8",
	                    "EndpointID": "1b25a05d8cce9717c40d1ca940b19f108a37c40d9c0e187f3952130d148f3185",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-616586",
	                        "76154eec8a12"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-616586 -n embed-certs-616586
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-616586 -n embed-certs-616586: exit status 2 (338.952397ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-616586 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-616586 logs -n 25: (1.542754904s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-264537 image list --format=json                                                                                                                          │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ pause   │ -p old-k8s-version-264537 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │                     │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable metrics-server -p no-preload-956694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │                     │
	│ stop    │ -p no-preload-956694 --alsologtostderr -v=3                                                                                                                              │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable dashboard -p no-preload-956694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p cert-expiration-164741                                                                                                                                                │ cert-expiration-164741       │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:51 UTC │
	│ image   │ no-preload-956694 image list --format=json                                                                                                                               │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ pause   │ -p no-preload-956694 --alsologtostderr -v=1                                                                                                                              │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │                     │
	│ delete  │ -p no-preload-956694                                                                                                                                                     │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p no-preload-956694                                                                                                                                                     │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p disable-driver-mounts-180932                                                                                                                                          │ disable-driver-mounts-180932 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │                     │
	│ stop    │ -p embed-certs-616586 --alsologtostderr -v=3                                                                                                                             │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-616586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538119 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ image   │ embed-certs-616586 image list --format=json                                                                                                                              │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ pause   │ -p embed-certs-616586 --alsologtostderr -v=1                                                                                                                             │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:51:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:51:34.982948  222763 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:51:34.983068  222763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:51:34.983076  222763 out.go:374] Setting ErrFile to fd 2...
	I1126 20:51:34.983081  222763 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:51:34.983336  222763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:51:34.983720  222763 out.go:368] Setting JSON to false
	I1126 20:51:34.984602  222763 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5625,"bootTime":1764184670,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:51:34.984664  222763 start.go:143] virtualization:  
	I1126 20:51:34.988580  222763 out.go:179] * [embed-certs-616586] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:51:34.991714  222763 notify.go:221] Checking for updates...
	I1126 20:51:34.992284  222763 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:51:34.995184  222763 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:51:34.998089  222763 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:51:35.001377  222763 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:51:35.004296  222763 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:51:35.007237  222763 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:51:35.011298  222763 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:51:35.012024  222763 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:51:35.077813  222763 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:51:35.077903  222763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:51:35.175257  222763 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:51:35.160734332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:51:35.175360  222763 docker.go:319] overlay module found
	I1126 20:51:35.178897  222763 out.go:179] * Using the docker driver based on existing profile
	I1126 20:51:34.059237  219464 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:51:34.059259  219464 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:51:34.059324  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:34.099089  219464 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:51:34.099111  219464 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:51:34.099173  219464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:51:34.113790  219464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:51:34.134241  219464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:51:34.387953  219464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:51:34.388071  219464 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:51:34.429184  219464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:51:34.579554  219464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:51:35.157208  219464 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538119" to be "Ready" ...
	I1126 20:51:35.158092  219464 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1126 20:51:35.181036  222763 start.go:309] selected driver: docker
	I1126 20:51:35.181054  222763 start.go:927] validating driver "docker" against &{Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:51:35.181147  222763 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:51:35.181819  222763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:51:35.283037  222763 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:51:35.266387513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:51:35.283346  222763 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:51:35.283377  222763 cni.go:84] Creating CNI manager for ""
	I1126 20:51:35.283439  222763 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:51:35.283477  222763 start.go:353] cluster config:
	{Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:51:35.286704  222763 out.go:179] * Starting "embed-certs-616586" primary control-plane node in "embed-certs-616586" cluster
	I1126 20:51:35.288760  222763 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:51:35.291134  222763 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:51:35.294928  222763 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:51:35.294973  222763 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:51:35.294994  222763 cache.go:65] Caching tarball of preloaded images
	I1126 20:51:35.295075  222763 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:51:35.295088  222763 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:51:35.295327  222763 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:51:35.295460  222763 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/config.json ...
	I1126 20:51:35.332798  222763 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:51:35.332819  222763 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:51:35.332832  222763 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:51:35.332866  222763 start.go:360] acquireMachinesLock for embed-certs-616586: {Name:mka5254437f68c39e0c98d2ff47cae58581678c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:51:35.332924  222763 start.go:364] duration metric: took 41.066µs to acquireMachinesLock for "embed-certs-616586"
	I1126 20:51:35.332942  222763 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:51:35.332947  222763 fix.go:54] fixHost starting: 
	I1126 20:51:35.333200  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:35.367540  222763 fix.go:112] recreateIfNeeded on embed-certs-616586: state=Stopped err=<nil>
	W1126 20:51:35.367569  222763 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:51:35.553883  219464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124666254s)
	I1126 20:51:35.571247  219464 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1126 20:51:35.370884  222763 out.go:252] * Restarting existing docker container for "embed-certs-616586" ...
	I1126 20:51:35.370969  222763 cli_runner.go:164] Run: docker start embed-certs-616586
	I1126 20:51:35.677340  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:35.698636  222763 kic.go:430] container "embed-certs-616586" state is running.
	I1126 20:51:35.699146  222763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:51:35.718559  222763 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/config.json ...
	I1126 20:51:35.718780  222763 machine.go:94] provisionDockerMachine start ...
	I1126 20:51:35.718845  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:35.740580  222763 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:35.740916  222763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:51:35.740925  222763 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:51:35.741654  222763 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1126 20:51:38.893253  222763 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-616586
	
	I1126 20:51:38.893274  222763 ubuntu.go:182] provisioning hostname "embed-certs-616586"
	I1126 20:51:38.893355  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:38.911175  222763 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:38.911503  222763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:51:38.911520  222763 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-616586 && echo "embed-certs-616586" | sudo tee /etc/hostname
	I1126 20:51:39.075419  222763 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-616586
	
	I1126 20:51:39.075497  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:39.093911  222763 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:39.094255  222763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:51:39.094279  222763 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-616586' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-616586/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-616586' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:51:39.242144  222763 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:51:39.242171  222763 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:51:39.242205  222763 ubuntu.go:190] setting up certificates
	I1126 20:51:39.242218  222763 provision.go:84] configureAuth start
	I1126 20:51:39.242297  222763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:51:39.260533  222763 provision.go:143] copyHostCerts
	I1126 20:51:39.260606  222763 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:51:39.260621  222763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:51:39.260698  222763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:51:39.260872  222763 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:51:39.260887  222763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:51:39.260924  222763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:51:39.261030  222763 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:51:39.261040  222763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:51:39.261076  222763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:51:39.261161  222763 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.embed-certs-616586 san=[127.0.0.1 192.168.85.2 embed-certs-616586 localhost minikube]
	I1126 20:51:39.549353  222763 provision.go:177] copyRemoteCerts
	I1126 20:51:39.549453  222763 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:51:39.549499  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:39.569897  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:39.678215  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:51:39.697984  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:51:39.716480  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:51:39.734318  222763 provision.go:87] duration metric: took 492.069234ms to configureAuth
	I1126 20:51:39.734386  222763 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:51:39.734601  222763 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:51:39.734710  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:39.751866  222763 main.go:143] libmachine: Using SSH client type: native
	I1126 20:51:39.752173  222763 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1126 20:51:39.752193  222763 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:51:35.574117  219464 addons.go:530] duration metric: took 1.573580643s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1126 20:51:35.668674  219464 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-538119" context rescaled to 1 replicas
	W1126 20:51:37.160007  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:39.162030  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	I1126 20:51:40.134935  222763 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:51:40.134964  222763 machine.go:97] duration metric: took 4.416175619s to provisionDockerMachine
	I1126 20:51:40.134985  222763 start.go:293] postStartSetup for "embed-certs-616586" (driver="docker")
	I1126 20:51:40.134997  222763 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:51:40.135095  222763 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:51:40.135160  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:40.156713  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:40.265896  222763 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:51:40.269418  222763 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:51:40.269448  222763 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:51:40.269460  222763 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:51:40.269521  222763 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:51:40.269625  222763 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:51:40.269742  222763 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:51:40.277448  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:51:40.295113  222763 start.go:296] duration metric: took 160.111239ms for postStartSetup
	I1126 20:51:40.295205  222763 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:51:40.295251  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:40.312426  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:40.415220  222763 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:51:40.420079  222763 fix.go:56] duration metric: took 5.087124917s for fixHost
	I1126 20:51:40.420106  222763 start.go:83] releasing machines lock for "embed-certs-616586", held for 5.087173121s
	I1126 20:51:40.420191  222763 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-616586
	I1126 20:51:40.436476  222763 ssh_runner.go:195] Run: cat /version.json
	I1126 20:51:40.436512  222763 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:51:40.436527  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:40.436574  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:40.456533  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:40.467682  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:40.557485  222763 ssh_runner.go:195] Run: systemctl --version
	I1126 20:51:40.651728  222763 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:51:40.691899  222763 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:51:40.696286  222763 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:51:40.696365  222763 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:51:40.704174  222763 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:51:40.704203  222763 start.go:496] detecting cgroup driver to use...
	I1126 20:51:40.704236  222763 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:51:40.704291  222763 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:51:40.719294  222763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:51:40.732386  222763 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:51:40.732448  222763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:51:40.747871  222763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:51:40.761243  222763 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:51:40.886870  222763 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:51:41.008368  222763 docker.go:234] disabling docker service ...
	I1126 20:51:41.008466  222763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:51:41.024840  222763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:51:41.038267  222763 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:51:41.153170  222763 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:51:41.270701  222763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:51:41.285626  222763 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:51:41.300508  222763 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:51:41.300613  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.309572  222763 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:51:41.309680  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.319652  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.328022  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.337453  222763 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:51:41.346073  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.355345  222763 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.363474  222763 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:51:41.372034  222763 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:51:41.379505  222763 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:51:41.386947  222763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:51:41.502558  222763 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:51:41.685225  222763 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:51:41.685306  222763 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:51:41.689354  222763 start.go:564] Will wait 60s for crictl version
	I1126 20:51:41.689421  222763 ssh_runner.go:195] Run: which crictl
	I1126 20:51:41.692882  222763 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:51:41.719592  222763 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:51:41.719681  222763 ssh_runner.go:195] Run: crio --version
	I1126 20:51:41.750281  222763 ssh_runner.go:195] Run: crio --version
	I1126 20:51:41.782914  222763 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:51:41.785907  222763 cli_runner.go:164] Run: docker network inspect embed-certs-616586 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:51:41.802005  222763 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:51:41.805667  222763 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:51:41.814939  222763 kubeadm.go:884] updating cluster {Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:51:41.815068  222763 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:51:41.815119  222763 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:51:41.858583  222763 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:51:41.858605  222763 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:51:41.858666  222763 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:51:41.888178  222763 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:51:41.888198  222763 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:51:41.888206  222763 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1126 20:51:41.888311  222763 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-616586 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:51:41.888393  222763 ssh_runner.go:195] Run: crio config
	I1126 20:51:41.939234  222763 cni.go:84] Creating CNI manager for ""
	I1126 20:51:41.939258  222763 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:51:41.939282  222763 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:51:41.939308  222763 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-616586 NodeName:embed-certs-616586 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:51:41.939434  222763 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-616586"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:51:41.939511  222763 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:51:41.947129  222763 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:51:41.947200  222763 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:51:41.954414  222763 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1126 20:51:41.967966  222763 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:51:41.980645  222763 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1126 20:51:41.992959  222763 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:51:41.996435  222763 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:51:42.005966  222763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:51:42.155984  222763 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:51:42.185341  222763 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586 for IP: 192.168.85.2
	I1126 20:51:42.185454  222763 certs.go:195] generating shared ca certs ...
	I1126 20:51:42.185493  222763 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:42.185787  222763 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:51:42.185889  222763 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:51:42.185941  222763 certs.go:257] generating profile certs ...
	I1126 20:51:42.186101  222763 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/client.key
	I1126 20:51:42.186251  222763 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key.319cfcc4
	I1126 20:51:42.186377  222763 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key
	I1126 20:51:42.186571  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:51:42.186661  222763 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:51:42.186694  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:51:42.186766  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:51:42.186834  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:51:42.186904  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:51:42.187049  222763 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:51:42.189966  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:51:42.286580  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:51:42.314719  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:51:42.338045  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:51:42.378193  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1126 20:51:42.406405  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:51:42.427072  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:51:42.454436  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/embed-certs-616586/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:51:42.480530  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:51:42.503742  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:51:42.524854  222763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:51:42.544432  222763 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:51:42.559173  222763 ssh_runner.go:195] Run: openssl version
	I1126 20:51:42.565986  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:51:42.574245  222763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:51:42.578961  222763 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:51:42.579026  222763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:51:42.624963  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:51:42.633348  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:51:42.641610  222763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:51:42.645558  222763 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:51:42.645622  222763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:51:42.689675  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:51:42.698176  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:51:42.706754  222763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:51:42.710637  222763 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:51:42.710720  222763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:51:42.752281  222763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:51:42.760414  222763 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:51:42.764811  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:51:42.805556  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:51:42.846461  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:51:42.887359  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:51:42.942785  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:51:43.018637  222763 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:51:43.096621  222763 kubeadm.go:401] StartCluster: {Name:embed-certs-616586 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-616586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:51:43.096709  222763 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:51:43.096853  222763 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:51:43.146149  222763 cri.go:89] found id: "67eef4727303c576bec7a2a74593b3b7f69b8f03f8409449791388af32fcfd49"
	I1126 20:51:43.146171  222763 cri.go:89] found id: "05600c45da34a337d755436cad09d9486b2e6abad961eca949578950d2380066"
	I1126 20:51:43.146176  222763 cri.go:89] found id: "3cd6972a6b24c555ea5bbdbb3c406b047bbe66e5a18a1e7aa5fa534b38e02cb9"
	I1126 20:51:43.146180  222763 cri.go:89] found id: "68acb68b93b72cb9c251bab9f93e45d90bb80f9e5df2a4d9840dfa88465b5ad8"
	I1126 20:51:43.146185  222763 cri.go:89] found id: ""
	I1126 20:51:43.146251  222763 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:51:43.168945  222763 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:51:43Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:51:43.169080  222763 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:51:43.181582  222763 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:51:43.181658  222763 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:51:43.181735  222763 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:51:43.194505  222763 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:51:43.195135  222763 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-616586" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:51:43.195421  222763 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-2326/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-616586" cluster setting kubeconfig missing "embed-certs-616586" context setting]
	I1126 20:51:43.195936  222763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:43.197348  222763 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:51:43.209050  222763 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1126 20:51:43.209082  222763 kubeadm.go:602] duration metric: took 27.404369ms to restartPrimaryControlPlane
	I1126 20:51:43.209112  222763 kubeadm.go:403] duration metric: took 112.501603ms to StartCluster
	I1126 20:51:43.209147  222763 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:43.209223  222763 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:51:43.210537  222763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:51:43.210793  222763 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:51:43.211123  222763 config.go:182] Loaded profile config "embed-certs-616586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:51:43.211265  222763 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:51:43.211331  222763 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-616586"
	I1126 20:51:43.211358  222763 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-616586"
	W1126 20:51:43.211368  222763 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:51:43.211374  222763 addons.go:70] Setting dashboard=true in profile "embed-certs-616586"
	I1126 20:51:43.211393  222763 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:51:43.211401  222763 addons.go:239] Setting addon dashboard=true in "embed-certs-616586"
	W1126 20:51:43.211410  222763 addons.go:248] addon dashboard should already be in state true
	I1126 20:51:43.211437  222763 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:51:43.211863  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:43.211871  222763 addons.go:70] Setting default-storageclass=true in profile "embed-certs-616586"
	I1126 20:51:43.211884  222763 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-616586"
	I1126 20:51:43.212170  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:43.211863  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:43.215339  222763 out.go:179] * Verifying Kubernetes components...
	I1126 20:51:43.228201  222763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:51:43.257313  222763 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:51:43.262081  222763 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:51:43.265456  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:51:43.265482  222763 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:51:43.265570  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:43.274882  222763 addons.go:239] Setting addon default-storageclass=true in "embed-certs-616586"
	W1126 20:51:43.274904  222763 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:51:43.274930  222763 host.go:66] Checking if "embed-certs-616586" exists ...
	I1126 20:51:43.275353  222763 cli_runner.go:164] Run: docker container inspect embed-certs-616586 --format={{.State.Status}}
	I1126 20:51:43.277694  222763 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:51:43.289769  222763 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:51:43.289796  222763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:51:43.289864  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:43.313417  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:43.321825  222763 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:51:43.321845  222763 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:51:43.321903  222763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-616586
	I1126 20:51:43.333582  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:43.362964  222763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/embed-certs-616586/id_rsa Username:docker}
	I1126 20:51:43.543394  222763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:51:43.562400  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:51:43.562426  222763 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:51:43.596854  222763 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:51:43.640132  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:51:43.640162  222763 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:51:43.694753  222763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:51:43.706462  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:51:43.706485  222763 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:51:43.767031  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:51:43.767055  222763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:51:43.814482  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:51:43.814513  222763 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:51:43.885590  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:51:43.885610  222763 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:51:43.948829  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:51:43.948851  222763 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:51:43.975351  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:51:43.975377  222763 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:51:44.005955  222763 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:51:44.005981  222763 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:51:44.023068  222763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1126 20:51:41.660338  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:43.660464  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	I1126 20:51:49.117821  222763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.574392719s)
	I1126 20:51:49.117915  222763 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.521036088s)
	I1126 20:51:49.117992  222763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.423210213s)
	I1126 20:51:49.118005  222763 node_ready.go:35] waiting up to 6m0s for node "embed-certs-616586" to be "Ready" ...
	I1126 20:51:49.199183  222763 node_ready.go:49] node "embed-certs-616586" is "Ready"
	I1126 20:51:49.199257  222763 node_ready.go:38] duration metric: took 81.198547ms for node "embed-certs-616586" to be "Ready" ...
	I1126 20:51:49.199284  222763 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:51:49.199367  222763 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:51:49.403591  222763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.380470789s)
	I1126 20:51:49.403764  222763 api_server.go:72] duration metric: took 6.192938469s to wait for apiserver process to appear ...
	I1126 20:51:49.403779  222763 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:51:49.403797  222763 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:51:49.406604  222763 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-616586 addons enable metrics-server
	
	I1126 20:51:49.409803  222763 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1126 20:51:49.412756  222763 addons.go:530] duration metric: took 6.201487608s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1126 20:51:49.416104  222763 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:51:49.416128  222763 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1126 20:51:49.904321  222763 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:51:49.913137  222763 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1126 20:51:49.914478  222763 api_server.go:141] control plane version: v1.34.1
	I1126 20:51:49.914533  222763 api_server.go:131] duration metric: took 510.74556ms to wait for apiserver health ...
	I1126 20:51:49.914556  222763 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:51:49.919418  222763 system_pods.go:59] 8 kube-system pods found
	I1126 20:51:49.919490  222763 system_pods.go:61] "coredns-66bc5c9577-lmmqs" [8b9cb74e-e5f6-413d-918a-66872e539adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:51:49.919519  222763 system_pods.go:61] "etcd-embed-certs-616586" [2379b064-da28-43a0-b71d-4a9803da3169] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:51:49.919565  222763 system_pods.go:61] "kindnet-5zbx9" [d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556] Running
	I1126 20:51:49.919591  222763 system_pods.go:61] "kube-apiserver-embed-certs-616586" [6e697b4a-2458-4ef6-8c72-8c8272b80d6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:51:49.919615  222763 system_pods.go:61] "kube-controller-manager-embed-certs-616586" [a0385efe-91d4-40ed-b76c-be281d7ed831] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:51:49.919635  222763 system_pods.go:61] "kube-proxy-g5vk4" [711e6b5c-eac4-4b0c-9a50-22ddb3b73c53] Running
	I1126 20:51:49.919668  222763 system_pods.go:61] "kube-scheduler-embed-certs-616586" [08620aaf-720f-4514-b73f-6eb433363368] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:51:49.919689  222763 system_pods.go:61] "storage-provisioner" [ceee294c-4db0-4dc0-888c-e3733a2592cb] Running
	I1126 20:51:49.919708  222763 system_pods.go:74] duration metric: took 5.132304ms to wait for pod list to return data ...
	I1126 20:51:49.919727  222763 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:51:49.922595  222763 default_sa.go:45] found service account: "default"
	I1126 20:51:49.922646  222763 default_sa.go:55] duration metric: took 2.900214ms for default service account to be created ...
	I1126 20:51:49.922671  222763 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:51:49.926448  222763 system_pods.go:86] 8 kube-system pods found
	I1126 20:51:49.926519  222763 system_pods.go:89] "coredns-66bc5c9577-lmmqs" [8b9cb74e-e5f6-413d-918a-66872e539adf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:51:49.926544  222763 system_pods.go:89] "etcd-embed-certs-616586" [2379b064-da28-43a0-b71d-4a9803da3169] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:51:49.926584  222763 system_pods.go:89] "kindnet-5zbx9" [d5e7ce8f-c5d6-4180-bcf3-d3fa72eaf556] Running
	I1126 20:51:49.926612  222763 system_pods.go:89] "kube-apiserver-embed-certs-616586" [6e697b4a-2458-4ef6-8c72-8c8272b80d6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:51:49.926634  222763 system_pods.go:89] "kube-controller-manager-embed-certs-616586" [a0385efe-91d4-40ed-b76c-be281d7ed831] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:51:49.926656  222763 system_pods.go:89] "kube-proxy-g5vk4" [711e6b5c-eac4-4b0c-9a50-22ddb3b73c53] Running
	I1126 20:51:49.926692  222763 system_pods.go:89] "kube-scheduler-embed-certs-616586" [08620aaf-720f-4514-b73f-6eb433363368] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:51:49.926715  222763 system_pods.go:89] "storage-provisioner" [ceee294c-4db0-4dc0-888c-e3733a2592cb] Running
	I1126 20:51:49.926738  222763 system_pods.go:126] duration metric: took 4.048865ms to wait for k8s-apps to be running ...
	I1126 20:51:49.926760  222763 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:51:49.926843  222763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:51:49.941320  222763 system_svc.go:56] duration metric: took 14.551367ms WaitForService to wait for kubelet
	I1126 20:51:49.941348  222763 kubeadm.go:587] duration metric: took 6.730521541s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:51:49.941366  222763 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:51:49.947058  222763 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:51:49.947090  222763 node_conditions.go:123] node cpu capacity is 2
	I1126 20:51:49.947105  222763 node_conditions.go:105] duration metric: took 5.732508ms to run NodePressure ...
	I1126 20:51:49.947118  222763 start.go:242] waiting for startup goroutines ...
	I1126 20:51:49.947136  222763 start.go:247] waiting for cluster config update ...
	I1126 20:51:49.947154  222763 start.go:256] writing updated cluster config ...
	I1126 20:51:49.947451  222763 ssh_runner.go:195] Run: rm -f paused
	I1126 20:51:49.951680  222763 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:51:49.963754  222763 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lmmqs" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:51:46.160465  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:48.660829  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:51.970314  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:51:53.978775  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:51:50.662149  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:53.160039  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:55.160578  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:56.470317  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:51:58.979402  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:51:57.660176  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:51:59.660383  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:01.472090  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:03.969449  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:01.662018  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:04.160420  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:05.969891  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:08.469380  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:06.160629  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:08.660300  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:10.470624  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:12.971838  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:10.660909  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	W1126 20:52:13.160106  219464 node_ready.go:57] node "default-k8s-diff-port-538119" has "Ready":"False" status (will retry)
	I1126 20:52:14.672338  219464 node_ready.go:49] node "default-k8s-diff-port-538119" is "Ready"
	I1126 20:52:14.672364  219464 node_ready.go:38] duration metric: took 39.515122789s for node "default-k8s-diff-port-538119" to be "Ready" ...
	I1126 20:52:14.672377  219464 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:52:14.672436  219464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:52:14.695280  219464 api_server.go:72] duration metric: took 40.695115535s to wait for apiserver process to appear ...
	I1126 20:52:14.695303  219464 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:52:14.695322  219464 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1126 20:52:14.703791  219464 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1126 20:52:14.705016  219464 api_server.go:141] control plane version: v1.34.1
	I1126 20:52:14.705048  219464 api_server.go:131] duration metric: took 9.738522ms to wait for apiserver health ...
	I1126 20:52:14.705057  219464 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:52:14.718581  219464 system_pods.go:59] 8 kube-system pods found
	I1126 20:52:14.718668  219464 system_pods.go:61] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:52:14.718691  219464 system_pods.go:61] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running
	I1126 20:52:14.718729  219464 system_pods.go:61] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running
	I1126 20:52:14.718749  219464 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running
	I1126 20:52:14.718768  219464 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running
	I1126 20:52:14.718791  219464 system_pods.go:61] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running
	I1126 20:52:14.718810  219464 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running
	I1126 20:52:14.718838  219464 system_pods.go:61] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:52:14.718864  219464 system_pods.go:74] duration metric: took 13.800402ms to wait for pod list to return data ...
	I1126 20:52:14.718886  219464 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:52:14.728500  219464 default_sa.go:45] found service account: "default"
	I1126 20:52:14.728522  219464 default_sa.go:55] duration metric: took 9.61581ms for default service account to be created ...
	I1126 20:52:14.728532  219464 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:52:14.731523  219464 system_pods.go:86] 8 kube-system pods found
	I1126 20:52:14.731559  219464 system_pods.go:89] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:52:14.731572  219464 system_pods.go:89] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running
	I1126 20:52:14.731581  219464 system_pods.go:89] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running
	I1126 20:52:14.731587  219464 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running
	I1126 20:52:14.731592  219464 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running
	I1126 20:52:14.731600  219464 system_pods.go:89] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running
	I1126 20:52:14.731605  219464 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running
	I1126 20:52:14.731615  219464 system_pods.go:89] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:52:14.731637  219464 retry.go:31] will retry after 273.011473ms: missing components: kube-dns
	I1126 20:52:15.009512  219464 system_pods.go:86] 8 kube-system pods found
	I1126 20:52:15.009608  219464 system_pods.go:89] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:52:15.009637  219464 system_pods.go:89] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running
	I1126 20:52:15.009678  219464 system_pods.go:89] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running
	I1126 20:52:15.009707  219464 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running
	I1126 20:52:15.009729  219464 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running
	I1126 20:52:15.009750  219464 system_pods.go:89] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running
	I1126 20:52:15.009784  219464 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running
	I1126 20:52:15.009815  219464 system_pods.go:89] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:52:15.009848  219464 retry.go:31] will retry after 359.24819ms: missing components: kube-dns
	I1126 20:52:15.375376  219464 system_pods.go:86] 8 kube-system pods found
	I1126 20:52:15.375411  219464 system_pods.go:89] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:52:15.375422  219464 system_pods.go:89] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running
	I1126 20:52:15.375428  219464 system_pods.go:89] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running
	I1126 20:52:15.375432  219464 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running
	I1126 20:52:15.375437  219464 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running
	I1126 20:52:15.375441  219464 system_pods.go:89] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running
	I1126 20:52:15.375445  219464 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running
	I1126 20:52:15.375451  219464 system_pods.go:89] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:52:15.375471  219464 retry.go:31] will retry after 321.000099ms: missing components: kube-dns
	I1126 20:52:15.700475  219464 system_pods.go:86] 8 kube-system pods found
	I1126 20:52:15.700515  219464 system_pods.go:89] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Running
	I1126 20:52:15.700522  219464 system_pods.go:89] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running
	I1126 20:52:15.700528  219464 system_pods.go:89] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running
	I1126 20:52:15.700533  219464 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running
	I1126 20:52:15.700537  219464 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running
	I1126 20:52:15.700541  219464 system_pods.go:89] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running
	I1126 20:52:15.700549  219464 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running
	I1126 20:52:15.700557  219464 system_pods.go:89] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Running
	I1126 20:52:15.700564  219464 system_pods.go:126] duration metric: took 972.026892ms to wait for k8s-apps to be running ...
	I1126 20:52:15.700571  219464 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:52:15.700627  219464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:52:15.713628  219464 system_svc.go:56] duration metric: took 13.046881ms WaitForService to wait for kubelet
	I1126 20:52:15.713660  219464 kubeadm.go:587] duration metric: took 41.713500378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:52:15.713681  219464 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:52:15.716632  219464 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:52:15.716666  219464 node_conditions.go:123] node cpu capacity is 2
	I1126 20:52:15.716680  219464 node_conditions.go:105] duration metric: took 2.993332ms to run NodePressure ...
	I1126 20:52:15.716693  219464 start.go:242] waiting for startup goroutines ...
	I1126 20:52:15.716709  219464 start.go:247] waiting for cluster config update ...
	I1126 20:52:15.716721  219464 start.go:256] writing updated cluster config ...
	I1126 20:52:15.717049  219464 ssh_runner.go:195] Run: rm -f paused
	I1126 20:52:15.720621  219464 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:52:15.725302  219464 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-whx45" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.730259  219464 pod_ready.go:94] pod "coredns-66bc5c9577-whx45" is "Ready"
	I1126 20:52:15.730285  219464 pod_ready.go:86] duration metric: took 4.955687ms for pod "coredns-66bc5c9577-whx45" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.732420  219464 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.737319  219464 pod_ready.go:94] pod "etcd-default-k8s-diff-port-538119" is "Ready"
	I1126 20:52:15.737346  219464 pod_ready.go:86] duration metric: took 4.900082ms for pod "etcd-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.740048  219464 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.745223  219464 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-538119" is "Ready"
	I1126 20:52:15.745248  219464 pod_ready.go:86] duration metric: took 5.174578ms for pod "kube-apiserver-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:15.747682  219464 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:16.125507  219464 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-538119" is "Ready"
	I1126 20:52:16.125540  219464 pod_ready.go:86] duration metric: took 377.827174ms for pod "kube-controller-manager-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:16.325821  219464 pod_ready.go:83] waiting for pod "kube-proxy-sp5l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:16.724803  219464 pod_ready.go:94] pod "kube-proxy-sp5l4" is "Ready"
	I1126 20:52:16.724839  219464 pod_ready.go:86] duration metric: took 398.994782ms for pod "kube-proxy-sp5l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:16.925552  219464 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:17.325707  219464 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-538119" is "Ready"
	I1126 20:52:17.325734  219464 pod_ready.go:86] duration metric: took 400.155489ms for pod "kube-scheduler-default-k8s-diff-port-538119" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:17.325748  219464 pod_ready.go:40] duration metric: took 1.605092601s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:52:17.387806  219464 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:52:17.390990  219464 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-538119" cluster and "default" namespace by default
	W1126 20:52:15.469635  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:17.478306  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:19.969433  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	W1126 20:52:21.974474  222763 pod_ready.go:104] pod "coredns-66bc5c9577-lmmqs" is not "Ready", error: <nil>
	I1126 20:52:23.469596  222763 pod_ready.go:94] pod "coredns-66bc5c9577-lmmqs" is "Ready"
	I1126 20:52:23.469625  222763 pod_ready.go:86] duration metric: took 33.505842541s for pod "coredns-66bc5c9577-lmmqs" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.472554  222763 pod_ready.go:83] waiting for pod "etcd-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.477444  222763 pod_ready.go:94] pod "etcd-embed-certs-616586" is "Ready"
	I1126 20:52:23.477477  222763 pod_ready.go:86] duration metric: took 4.895759ms for pod "etcd-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.480057  222763 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.489102  222763 pod_ready.go:94] pod "kube-apiserver-embed-certs-616586" is "Ready"
	I1126 20:52:23.489125  222763 pod_ready.go:86] duration metric: took 9.045768ms for pod "kube-apiserver-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.492057  222763 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.667618  222763 pod_ready.go:94] pod "kube-controller-manager-embed-certs-616586" is "Ready"
	I1126 20:52:23.667648  222763 pod_ready.go:86] duration metric: took 175.562166ms for pod "kube-controller-manager-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:23.867881  222763 pod_ready.go:83] waiting for pod "kube-proxy-g5vk4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:24.267216  222763 pod_ready.go:94] pod "kube-proxy-g5vk4" is "Ready"
	I1126 20:52:24.267244  222763 pod_ready.go:86] duration metric: took 399.333009ms for pod "kube-proxy-g5vk4" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:24.467299  222763 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:24.867500  222763 pod_ready.go:94] pod "kube-scheduler-embed-certs-616586" is "Ready"
	I1126 20:52:24.867525  222763 pod_ready.go:86] duration metric: took 400.196197ms for pod "kube-scheduler-embed-certs-616586" in "kube-system" namespace to be "Ready" or be gone ...
	I1126 20:52:24.867536  222763 pod_ready.go:40] duration metric: took 34.915821928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:52:24.926575  222763 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:52:24.929672  222763 out.go:179] * Done! kubectl is now configured to use "embed-certs-616586" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 20:52:17 embed-certs-616586 crio[656]: time="2025-11-26T20:52:17.697182513Z" level=info msg="Removed container a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m/dashboard-metrics-scraper" id=09c25211-18bc-40f9-aa99-5772b8945ddc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:52:19 embed-certs-616586 conmon[1156]: conmon 079110bf8f15d397c0fd <ninfo>: container 1159 exited with status 1
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.664652378Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b6db4d7e-1a2d-4b4e-b7d7-408c7dc33f98 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.666294743Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ebb0bb88-2d37-446c-b805-e76ac67f6324 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.667491762Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d3ceb1ce-bdae-4553-9f03-28ad8cee30f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.667605728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.67260125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.672882982Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/51433b93fa430336631c2504f3944c7bf01230273dac388286f2f069356d746f/merged/etc/passwd: no such file or directory"
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.673006695Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/51433b93fa430336631c2504f3944c7bf01230273dac388286f2f069356d746f/merged/etc/group: no such file or directory"
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.67330322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.695702412Z" level=info msg="Created container f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b: kube-system/storage-provisioner/storage-provisioner" id=d3ceb1ce-bdae-4553-9f03-28ad8cee30f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.696748872Z" level=info msg="Starting container: f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b" id=32ead568-4aa8-4b40-980e-744f4ac9110c name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.698713508Z" level=info msg="Started container" PID=1658 containerID=f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b description=kube-system/storage-provisioner/storage-provisioner id=32ead568-4aa8-4b40-980e-744f4ac9110c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d277d237c5cf2d5cde75449070dfcca767bb24ca23a425236a97eb54a0092a2b
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.434313879Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.438341823Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.438375783Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.438393284Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.44116826Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.44119779Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.441217941Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.444211954Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.444243978Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.444265672Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.447196286Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.447226251Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f678d3447e490       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   d277d237c5cf2       storage-provisioner                          kube-system
	b242c58cd6a92       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   01e663060f2d7       dashboard-metrics-scraper-6ffb444bf9-zg22m   kubernetes-dashboard
	cc0cb0d7adeca       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   37d631cbcf8eb       kubernetes-dashboard-855c9754f9-6hlql        kubernetes-dashboard
	4a3e6ee186809       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   d7d567f8cd9ae       busybox                                      default
	05caffd7fc383       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   5554e674892ee       coredns-66bc5c9577-lmmqs                     kube-system
	079110bf8f15d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   d277d237c5cf2       storage-provisioner                          kube-system
	ff0908c8190d6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   43207882fe1ef       kindnet-5zbx9                                kube-system
	ebaf108a1d8ad       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   70d186a754c99       kube-proxy-g5vk4                             kube-system
	67eef4727303c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   050cc4d913a48       etcd-embed-certs-616586                      kube-system
	05600c45da34a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   795b86a0af16b       kube-apiserver-embed-certs-616586            kube-system
	3cd6972a6b24c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   a5010197a2d6f       kube-controller-manager-embed-certs-616586   kube-system
	68acb68b93b72       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   7aa27e6811dc7       kube-scheduler-embed-certs-616586            kube-system
	
	
	==> coredns [05caffd7fc383997b08372234b64425e04fa2dbf03830dbf95855408fa9b65c0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53426 - 24136 "HINFO IN 8845277975998085674.400065335123723975. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.034076022s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-616586
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-616586
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=embed-certs-616586
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_50_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:50:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-616586
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:52:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:52:18 +0000   Wed, 26 Nov 2025 20:50:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:52:18 +0000   Wed, 26 Nov 2025 20:50:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:52:18 +0000   Wed, 26 Nov 2025 20:50:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:52:18 +0000   Wed, 26 Nov 2025 20:51:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-616586
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                dbf22ae5-72fe-466d-9fb8-0a6db34daaea
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-lmmqs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-616586                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-5zbx9                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-embed-certs-616586             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-embed-certs-616586    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-g5vk4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-embed-certs-616586             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zg22m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6hlql         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m15s              kube-proxy       
	  Normal   Starting                 50s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m22s              kubelet          Node embed-certs-616586 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m22s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m22s              kubelet          Node embed-certs-616586 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m22s              kubelet          Node embed-certs-616586 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m22s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m18s              node-controller  Node embed-certs-616586 event: Registered Node embed-certs-616586 in Controller
	  Normal   NodeReady                96s                kubelet          Node embed-certs-616586 status is now: NodeReady
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node embed-certs-616586 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node embed-certs-616586 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node embed-certs-616586 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                node-controller  Node embed-certs-616586 event: Registered Node embed-certs-616586 in Controller
	
	
	==> dmesg <==
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	[Nov26 20:49] overlayfs: idmapped layers are currently not supported
	[Nov26 20:50] overlayfs: idmapped layers are currently not supported
	[Nov26 20:51] overlayfs: idmapped layers are currently not supported
	[ +24.066506] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [67eef4727303c576bec7a2a74593b3b7f69b8f03f8409449791388af32fcfd49] <==
	{"level":"warn","ts":"2025-11-26T20:51:45.837131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:45.864581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:45.889445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:45.923039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:45.949787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:45.998733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.021039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.056148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.084814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.113064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.145465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.196427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.232907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.282100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.313750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.339888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.368551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.394130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.434435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.451819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.481520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.518116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.550089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.608324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.670723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38646","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:52:40 up  1:34,  0 user,  load average: 2.97, 3.15, 2.58
	Linux embed-certs-616586 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ff0908c8190d668949024b3a2d898917d6596966a0f2c2198d6de6d5c823461b] <==
	I1126 20:51:49.239575       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:51:49.239969       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:51:49.240117       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:51:49.240158       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:51:49.240205       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:51:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:51:49.429417       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:51:49.429486       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:51:49.429529       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:51:49.430515       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:52:19.429644       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:52:19.430792       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:52:19.430795       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:52:19.430963       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1126 20:52:21.030249       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:52:21.030352       1 metrics.go:72] Registering metrics
	I1126 20:52:21.030451       1 controller.go:711] "Syncing nftables rules"
	I1126 20:52:29.434005       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:52:29.434060       1 main.go:301] handling current node
	I1126 20:52:39.438021       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:52:39.438054       1 main.go:301] handling current node
	
	
	==> kube-apiserver [05600c45da34a337d755436cad09d9486b2e6abad961eca949578950d2380066] <==
	I1126 20:51:47.792815       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:51:47.792910       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:51:47.792927       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:51:47.792935       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:51:47.792941       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:51:47.803343       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:51:47.853139       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:51:47.860072       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:51:47.860131       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:51:47.880481       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:51:47.880518       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1126 20:51:47.880764       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1126 20:51:47.880777       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1126 20:51:47.916352       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:51:48.354885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:51:48.506864       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:51:48.580068       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:51:48.878951       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:51:49.109119       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:51:49.185744       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:51:49.358696       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.149.146"}
	I1126 20:51:49.396653       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.174.20"}
	I1126 20:51:52.258741       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:51:52.358809       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:51:52.409486       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3cd6972a6b24c555ea5bbdbb3c406b047bbe66e5a18a1e7aa5fa534b38e02cb9] <==
	I1126 20:51:51.986253       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:51:51.989546       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:51:51.989613       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:51:51.991790       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 20:51:51.994988       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1126 20:51:51.997192       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 20:51:51.997231       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1126 20:51:51.997264       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1126 20:51:52.000558       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1126 20:51:52.001825       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1126 20:51:52.001870       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 20:51:52.001888       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:51:52.002081       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 20:51:52.002313       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:51:52.003271       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:51:52.003627       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:51:52.006757       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:51:52.006814       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:51:52.011670       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:51:52.011809       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:51:52.011852       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:51:52.011864       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:51:52.011871       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:51:52.013469       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:51:52.014656       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [ebaf108a1d8ad6369fcdb2bd0e441964826b9647f9e876db927e0728e70f0a7c] <==
	I1126 20:51:49.561989       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:51:49.771542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:51:49.880421       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:51:49.880536       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1126 20:51:49.880636       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:51:49.935291       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:51:49.935433       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:51:49.945843       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:51:49.946159       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:51:49.946181       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:51:49.954326       1 config.go:200] "Starting service config controller"
	I1126 20:51:49.954352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:51:49.954376       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:51:49.954381       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:51:49.954413       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:51:49.954424       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:51:49.955086       1 config.go:309] "Starting node config controller"
	I1126 20:51:49.955104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:51:49.955110       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:51:50.055334       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:51:50.055440       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:51:50.055470       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [68acb68b93b72cb9c251bab9f93e45d90bb80f9e5df2a4d9840dfa88465b5ad8] <==
	I1126 20:51:46.378172       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:51:50.824910       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:51:50.824941       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:51:50.831265       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1126 20:51:50.831367       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1126 20:51:50.831426       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:51:50.831469       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:51:50.831516       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:51:50.831546       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:51:50.831744       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:51:50.831855       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:51:50.932225       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:51:50.932315       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1126 20:51:50.932349       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:51:52 embed-certs-616586 kubelet[788]: I1126 20:51:52.614205     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zg22m\" (UID: \"bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m"
	Nov 26 20:51:52 embed-certs-616586 kubelet[788]: I1126 20:51:52.714851     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/11e8fba4-bcc7-4952-a344-fcd4f0f6240a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6hlql\" (UID: \"11e8fba4-bcc7-4952-a344-fcd4f0f6240a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hlql"
	Nov 26 20:51:52 embed-certs-616586 kubelet[788]: I1126 20:51:52.715122     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv9qf\" (UniqueName: \"kubernetes.io/projected/11e8fba4-bcc7-4952-a344-fcd4f0f6240a-kube-api-access-mv9qf\") pod \"kubernetes-dashboard-855c9754f9-6hlql\" (UID: \"11e8fba4-bcc7-4952-a344-fcd4f0f6240a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hlql"
	Nov 26 20:51:52 embed-certs-616586 kubelet[788]: W1126 20:51:52.851284     788 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/crio-01e663060f2d73c3720001cbfc5b79c53047336658fb6b7ea7e93647dd490fcc WatchSource:0}: Error finding container 01e663060f2d73c3720001cbfc5b79c53047336658fb6b7ea7e93647dd490fcc: Status 404 returned error can't find the container with id 01e663060f2d73c3720001cbfc5b79c53047336658fb6b7ea7e93647dd490fcc
	Nov 26 20:51:53 embed-certs-616586 kubelet[788]: I1126 20:51:53.094306     788 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:51:53 embed-certs-616586 kubelet[788]: W1126 20:51:53.161039     788 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/crio-37d631cbcf8ebfd77a54adbea29d27f025f149f51822250cbdb0f0412de8789d WatchSource:0}: Error finding container 37d631cbcf8ebfd77a54adbea29d27f025f149f51822250cbdb0f0412de8789d: Status 404 returned error can't find the container with id 37d631cbcf8ebfd77a54adbea29d27f025f149f51822250cbdb0f0412de8789d
	Nov 26 20:51:57 embed-certs-616586 kubelet[788]: I1126 20:51:57.570614     788 scope.go:117] "RemoveContainer" containerID="ee4c200a57ad9a2f98924af4a9d48118b8f36942084f48db81f30744a865594e"
	Nov 26 20:51:58 embed-certs-616586 kubelet[788]: I1126 20:51:58.575447     788 scope.go:117] "RemoveContainer" containerID="ee4c200a57ad9a2f98924af4a9d48118b8f36942084f48db81f30744a865594e"
	Nov 26 20:51:58 embed-certs-616586 kubelet[788]: I1126 20:51:58.575776     788 scope.go:117] "RemoveContainer" containerID="a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361"
	Nov 26 20:51:58 embed-certs-616586 kubelet[788]: E1126 20:51:58.575921     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zg22m_kubernetes-dashboard(bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m" podUID="bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c"
	Nov 26 20:51:59 embed-certs-616586 kubelet[788]: I1126 20:51:59.578571     788 scope.go:117] "RemoveContainer" containerID="a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361"
	Nov 26 20:51:59 embed-certs-616586 kubelet[788]: E1126 20:51:59.578726     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zg22m_kubernetes-dashboard(bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m" podUID="bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c"
	Nov 26 20:52:02 embed-certs-616586 kubelet[788]: I1126 20:52:02.815652     788 scope.go:117] "RemoveContainer" containerID="a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361"
	Nov 26 20:52:02 embed-certs-616586 kubelet[788]: E1126 20:52:02.816407     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zg22m_kubernetes-dashboard(bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m" podUID="bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c"
	Nov 26 20:52:17 embed-certs-616586 kubelet[788]: I1126 20:52:17.469638     788 scope.go:117] "RemoveContainer" containerID="a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361"
	Nov 26 20:52:17 embed-certs-616586 kubelet[788]: I1126 20:52:17.655584     788 scope.go:117] "RemoveContainer" containerID="a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361"
	Nov 26 20:52:17 embed-certs-616586 kubelet[788]: I1126 20:52:17.655959     788 scope.go:117] "RemoveContainer" containerID="b242c58cd6a92f6ae5ee1f8d498bfe274cd49d08c6f8e168776f53723a9db999"
	Nov 26 20:52:17 embed-certs-616586 kubelet[788]: E1126 20:52:17.656157     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zg22m_kubernetes-dashboard(bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m" podUID="bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c"
	Nov 26 20:52:17 embed-certs-616586 kubelet[788]: I1126 20:52:17.696143     788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hlql" podStartSLOduration=16.411288158 podStartE2EDuration="25.696121637s" podCreationTimestamp="2025-11-26 20:51:52 +0000 UTC" firstStartedPulling="2025-11-26 20:51:53.167763171 +0000 UTC m=+10.984311421" lastFinishedPulling="2025-11-26 20:52:02.45259665 +0000 UTC m=+20.269144900" observedRunningTime="2025-11-26 20:52:02.622020932 +0000 UTC m=+20.438569199" watchObservedRunningTime="2025-11-26 20:52:17.696121637 +0000 UTC m=+35.512669895"
	Nov 26 20:52:19 embed-certs-616586 kubelet[788]: I1126 20:52:19.663737     788 scope.go:117] "RemoveContainer" containerID="079110bf8f15d397c0fdba7593f783a31a000fcd6b92de2b4477a09731aab5bb"
	Nov 26 20:52:22 embed-certs-616586 kubelet[788]: I1126 20:52:22.815157     788 scope.go:117] "RemoveContainer" containerID="b242c58cd6a92f6ae5ee1f8d498bfe274cd49d08c6f8e168776f53723a9db999"
	Nov 26 20:52:22 embed-certs-616586 kubelet[788]: E1126 20:52:22.815339     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zg22m_kubernetes-dashboard(bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m" podUID="bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c"
	Nov 26 20:52:37 embed-certs-616586 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:52:37 embed-certs-616586 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:52:37 embed-certs-616586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cc0cb0d7adecab0e806790ede0bafa00cebde36ff2976b7770c516f4f5ebb8c0] <==
	2025/11/26 20:52:02 Using namespace: kubernetes-dashboard
	2025/11/26 20:52:02 Using in-cluster config to connect to apiserver
	2025/11/26 20:52:02 Using secret token for csrf signing
	2025/11/26 20:52:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:52:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:52:02 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:52:02 Generating JWE encryption key
	2025/11/26 20:52:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:52:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:52:03 Initializing JWE encryption key from synchronized object
	2025/11/26 20:52:03 Creating in-cluster Sidecar client
	2025/11/26 20:52:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:52:03 Serving insecurely on HTTP port: 9090
	2025/11/26 20:52:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:52:02 Starting overwatch
	
	
	==> storage-provisioner [079110bf8f15d397c0fdba7593f783a31a000fcd6b92de2b4477a09731aab5bb] <==
	I1126 20:51:49.271711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:52:19.274262       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b] <==
	I1126 20:52:19.715937       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:52:19.731934       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:52:19.732061       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:52:19.734680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:23.189816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:27.450359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:31.048238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:34.101427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:37.124881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:37.135505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:52:37.135805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:52:37.137240       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-616586_fdafab16-02d0-4c7b-a6f4-4de0b0845a82!
	I1126 20:52:37.140296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74d92165-f92e-42f6-bb51-54e16bfb29a8", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-616586_fdafab16-02d0-4c7b-a6f4-4de0b0845a82 became leader
	W1126 20:52:37.152096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:37.160322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:52:37.239957       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-616586_fdafab16-02d0-4c7b-a6f4-4de0b0845a82!
	W1126 20:52:39.164335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:39.181990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-616586 -n embed-certs-616586
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-616586 -n embed-certs-616586: exit status 2 (569.664365ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-616586 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-616586
helpers_test.go:243: (dbg) docker inspect embed-certs-616586:

-- stdout --
	[
	    {
	        "Id": "76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d",
	        "Created": "2025-11-26T20:49:51.803939719Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:51:35.407395004Z",
	            "FinishedAt": "2025-11-26T20:51:34.290370504Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/hosts",
	        "LogPath": "/var/lib/docker/containers/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d-json.log",
	        "Name": "/embed-certs-616586",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-616586:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-616586",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d",
	                "LowerDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ee40ec00c8e4f4c52d4005a57d1bc8fa1807a5f08ea65960ca2b855ee1aee036/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-616586",
	                "Source": "/var/lib/docker/volumes/embed-certs-616586/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-616586",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-616586",
	                "name.minikube.sigs.k8s.io": "embed-certs-616586",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "73660bb5ae765a5adb2c739fef6b4530ea6a2229636bcf527ebf424e7b460de2",
	            "SandboxKey": "/var/run/docker/netns/73660bb5ae76",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-616586": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:d1:d9:a1:42:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e81bfab46f3df2dcaf4383ddbd73f7ed61981d9755f2d4e0122a1a2df6affbf8",
	                    "EndpointID": "1b25a05d8cce9717c40d1ca940b19f108a37c40d9c0e187f3952130d148f3185",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-616586",
	                        "76154eec8a12"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-616586 -n embed-certs-616586
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-616586 -n embed-certs-616586: exit status 2 (443.532334ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-616586 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-616586 logs -n 25: (1.313509389s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ delete  │ -p old-k8s-version-264537                                                                                                                                                │ old-k8s-version-264537       │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:48 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:48 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable metrics-server -p no-preload-956694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │                     │
	│ stop    │ -p no-preload-956694 --alsologtostderr -v=3                                                                                                                              │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable dashboard -p no-preload-956694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p cert-expiration-164741                                                                                                                                                │ cert-expiration-164741       │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:51 UTC │
	│ image   │ no-preload-956694 image list --format=json                                                                                                                               │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ pause   │ -p no-preload-956694 --alsologtostderr -v=1                                                                                                                              │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │                     │
	│ delete  │ -p no-preload-956694                                                                                                                                                     │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p no-preload-956694                                                                                                                                                     │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p disable-driver-mounts-180932                                                                                                                                          │ disable-driver-mounts-180932 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │                     │
	│ stop    │ -p embed-certs-616586 --alsologtostderr -v=3                                                                                                                             │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-616586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538119 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ image   │ embed-certs-616586 image list --format=json                                                                                                                              │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ pause   │ -p embed-certs-616586 --alsologtostderr -v=1                                                                                                                             │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:52:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:52:40.627885  226403 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:52:40.628472  226403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:52:40.628507  226403 out.go:374] Setting ErrFile to fd 2...
	I1126 20:52:40.628528  226403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:52:40.628869  226403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:52:40.629307  226403 out.go:368] Setting JSON to false
	I1126 20:52:40.630345  226403 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5691,"bootTime":1764184670,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:52:40.630446  226403 start.go:143] virtualization:  
	I1126 20:52:40.634061  226403 out.go:179] * [default-k8s-diff-port-538119] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:52:40.637223  226403 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:52:40.637303  226403 notify.go:221] Checking for updates...
	I1126 20:52:40.640971  226403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:52:40.644478  226403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:52:40.647591  226403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:52:40.650419  226403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:52:40.653508  226403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:52:40.656921  226403 config.go:182] Loaded profile config "default-k8s-diff-port-538119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:52:40.657601  226403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:52:40.703009  226403 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:52:40.703136  226403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:52:40.785385  226403 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:52:40.775224563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:52:40.785494  226403 docker.go:319] overlay module found
	I1126 20:52:40.789348  226403 out.go:179] * Using the docker driver based on existing profile
	I1126 20:52:40.792325  226403 start.go:309] selected driver: docker
	I1126 20:52:40.792353  226403 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-538119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538119 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:52:40.792459  226403 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:52:40.793152  226403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:52:40.873847  226403 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:52:40.862476687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:52:40.874282  226403 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:52:40.874311  226403 cni.go:84] Creating CNI manager for ""
	I1126 20:52:40.874373  226403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:52:40.874416  226403 start.go:353] cluster config:
	{Name:default-k8s-diff-port-538119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:52:40.877431  226403 out.go:179] * Starting "default-k8s-diff-port-538119" primary control-plane node in "default-k8s-diff-port-538119" cluster
	I1126 20:52:40.880319  226403 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:52:40.883225  226403 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:52:40.886180  226403 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:52:40.886220  226403 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:52:40.886229  226403 cache.go:65] Caching tarball of preloaded images
	I1126 20:52:40.886294  226403 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:52:40.886321  226403 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:52:40.886331  226403 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:52:40.886450  226403 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/config.json ...
	I1126 20:52:40.909181  226403 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:52:40.909205  226403 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:52:40.909221  226403 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:52:40.909251  226403 start.go:360] acquireMachinesLock for default-k8s-diff-port-538119: {Name:mkdef3fabf2e513d8e713b1948a2979a9bdfa526 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:52:40.909314  226403 start.go:364] duration metric: took 33.861µs to acquireMachinesLock for "default-k8s-diff-port-538119"
	I1126 20:52:40.909336  226403 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:52:40.909345  226403 fix.go:54] fixHost starting: 
	I1126 20:52:40.909604  226403 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538119 --format={{.State.Status}}
	I1126 20:52:40.930043  226403 fix.go:112] recreateIfNeeded on default-k8s-diff-port-538119: state=Stopped err=<nil>
	W1126 20:52:40.930072  226403 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 26 20:52:17 embed-certs-616586 crio[656]: time="2025-11-26T20:52:17.697182513Z" level=info msg="Removed container a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m/dashboard-metrics-scraper" id=09c25211-18bc-40f9-aa99-5772b8945ddc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:52:19 embed-certs-616586 conmon[1156]: conmon 079110bf8f15d397c0fd <ninfo>: container 1159 exited with status 1
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.664652378Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b6db4d7e-1a2d-4b4e-b7d7-408c7dc33f98 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.666294743Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ebb0bb88-2d37-446c-b805-e76ac67f6324 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.667491762Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d3ceb1ce-bdae-4553-9f03-28ad8cee30f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.667605728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.67260125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.672882982Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/51433b93fa430336631c2504f3944c7bf01230273dac388286f2f069356d746f/merged/etc/passwd: no such file or directory"
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.673006695Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/51433b93fa430336631c2504f3944c7bf01230273dac388286f2f069356d746f/merged/etc/group: no such file or directory"
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.67330322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.695702412Z" level=info msg="Created container f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b: kube-system/storage-provisioner/storage-provisioner" id=d3ceb1ce-bdae-4553-9f03-28ad8cee30f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.696748872Z" level=info msg="Starting container: f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b" id=32ead568-4aa8-4b40-980e-744f4ac9110c name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:52:19 embed-certs-616586 crio[656]: time="2025-11-26T20:52:19.698713508Z" level=info msg="Started container" PID=1658 containerID=f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b description=kube-system/storage-provisioner/storage-provisioner id=32ead568-4aa8-4b40-980e-744f4ac9110c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d277d237c5cf2d5cde75449070dfcca767bb24ca23a425236a97eb54a0092a2b
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.434313879Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.438341823Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.438375783Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.438393284Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.44116826Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.44119779Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.441217941Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.444211954Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.444243978Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.444265672Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.447196286Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:52:29 embed-certs-616586 crio[656]: time="2025-11-26T20:52:29.447226251Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f678d3447e490       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   d277d237c5cf2       storage-provisioner                          kube-system
	b242c58cd6a92       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago       Exited              dashboard-metrics-scraper   2                   01e663060f2d7       dashboard-metrics-scraper-6ffb444bf9-zg22m   kubernetes-dashboard
	cc0cb0d7adeca       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   37d631cbcf8eb       kubernetes-dashboard-855c9754f9-6hlql        kubernetes-dashboard
	4a3e6ee186809       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   d7d567f8cd9ae       busybox                                      default
	05caffd7fc383       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   5554e674892ee       coredns-66bc5c9577-lmmqs                     kube-system
	079110bf8f15d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   d277d237c5cf2       storage-provisioner                          kube-system
	ff0908c8190d6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   43207882fe1ef       kindnet-5zbx9                                kube-system
	ebaf108a1d8ad       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   70d186a754c99       kube-proxy-g5vk4                             kube-system
	67eef4727303c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   050cc4d913a48       etcd-embed-certs-616586                      kube-system
	05600c45da34a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   795b86a0af16b       kube-apiserver-embed-certs-616586            kube-system
	3cd6972a6b24c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   a5010197a2d6f       kube-controller-manager-embed-certs-616586   kube-system
	68acb68b93b72       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   7aa27e6811dc7       kube-scheduler-embed-certs-616586            kube-system
	
	
	==> coredns [05caffd7fc383997b08372234b64425e04fa2dbf03830dbf95855408fa9b65c0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53426 - 24136 "HINFO IN 8845277975998085674.400065335123723975. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.034076022s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-616586
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-616586
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=embed-certs-616586
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_50_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:50:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-616586
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:52:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:52:18 +0000   Wed, 26 Nov 2025 20:50:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:52:18 +0000   Wed, 26 Nov 2025 20:50:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:52:18 +0000   Wed, 26 Nov 2025 20:50:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:52:18 +0000   Wed, 26 Nov 2025 20:51:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-616586
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                dbf22ae5-72fe-466d-9fb8-0a6db34daaea
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-lmmqs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-embed-certs-616586                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-5zbx9                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-embed-certs-616586             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-embed-certs-616586    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-g5vk4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-embed-certs-616586             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zg22m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6hlql         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m18s              kube-proxy       
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m25s              kubelet          Node embed-certs-616586 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m25s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s              kubelet          Node embed-certs-616586 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m25s              kubelet          Node embed-certs-616586 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m25s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s              node-controller  Node embed-certs-616586 event: Registered Node embed-certs-616586 in Controller
	  Normal   NodeReady                99s                kubelet          Node embed-certs-616586 status is now: NodeReady
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node embed-certs-616586 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node embed-certs-616586 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node embed-certs-616586 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                node-controller  Node embed-certs-616586 event: Registered Node embed-certs-616586 in Controller
	
	
	==> dmesg <==
	[Nov26 20:25] overlayfs: idmapped layers are currently not supported
	[Nov26 20:27] overlayfs: idmapped layers are currently not supported
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	[Nov26 20:49] overlayfs: idmapped layers are currently not supported
	[Nov26 20:50] overlayfs: idmapped layers are currently not supported
	[Nov26 20:51] overlayfs: idmapped layers are currently not supported
	[ +24.066506] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [67eef4727303c576bec7a2a74593b3b7f69b8f03f8409449791388af32fcfd49] <==
	{"level":"warn","ts":"2025-11-26T20:51:45.837131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:45.864581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:45.889445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:45.923039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:45.949787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:45.998733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.021039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.056148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.084814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.113064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.145465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.196427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.232907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.282100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.313750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.339888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.368551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.394130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.434435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.451819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.481520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.518116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.550089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.608324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:51:46.670723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38646","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:52:43 up  1:34,  0 user,  load average: 2.97, 3.15, 2.58
	Linux embed-certs-616586 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ff0908c8190d668949024b3a2d898917d6596966a0f2c2198d6de6d5c823461b] <==
	I1126 20:51:49.239575       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:51:49.239969       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:51:49.240117       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:51:49.240158       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:51:49.240205       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:51:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:51:49.429417       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:51:49.429486       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:51:49.429529       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:51:49.430515       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:52:19.429644       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1126 20:52:19.430792       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:52:19.430795       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:52:19.430963       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1126 20:52:21.030249       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:52:21.030352       1 metrics.go:72] Registering metrics
	I1126 20:52:21.030451       1 controller.go:711] "Syncing nftables rules"
	I1126 20:52:29.434005       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:52:29.434060       1 main.go:301] handling current node
	I1126 20:52:39.438021       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1126 20:52:39.438054       1 main.go:301] handling current node
	
	
	==> kube-apiserver [05600c45da34a337d755436cad09d9486b2e6abad961eca949578950d2380066] <==
	I1126 20:51:47.792815       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1126 20:51:47.792910       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:51:47.792927       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:51:47.792935       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:51:47.792941       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:51:47.803343       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:51:47.853139       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:51:47.860072       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1126 20:51:47.860131       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:51:47.880481       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:51:47.880518       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1126 20:51:47.880764       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1126 20:51:47.880777       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1126 20:51:47.916352       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:51:48.354885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:51:48.506864       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:51:48.580068       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:51:48.878951       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:51:49.109119       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:51:49.185744       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:51:49.358696       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.149.146"}
	I1126 20:51:49.396653       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.174.20"}
	I1126 20:51:52.258741       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:51:52.358809       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:51:52.409486       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3cd6972a6b24c555ea5bbdbb3c406b047bbe66e5a18a1e7aa5fa534b38e02cb9] <==
	I1126 20:51:51.986253       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:51:51.989546       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:51:51.989613       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:51:51.991790       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 20:51:51.994988       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1126 20:51:51.997192       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1126 20:51:51.997231       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1126 20:51:51.997264       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1126 20:51:52.000558       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1126 20:51:52.001825       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1126 20:51:52.001870       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1126 20:51:52.001888       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:51:52.002081       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 20:51:52.002313       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:51:52.003271       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:51:52.003627       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:51:52.006757       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1126 20:51:52.006814       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:51:52.011670       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:51:52.011809       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:51:52.011852       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:51:52.011864       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:51:52.011871       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:51:52.013469       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:51:52.014656       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [ebaf108a1d8ad6369fcdb2bd0e441964826b9647f9e876db927e0728e70f0a7c] <==
	I1126 20:51:49.561989       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:51:49.771542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:51:49.880421       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:51:49.880536       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1126 20:51:49.880636       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:51:49.935291       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:51:49.935433       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:51:49.945843       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:51:49.946159       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:51:49.946181       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:51:49.954326       1 config.go:200] "Starting service config controller"
	I1126 20:51:49.954352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:51:49.954376       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:51:49.954381       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:51:49.954413       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:51:49.954424       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:51:49.955086       1 config.go:309] "Starting node config controller"
	I1126 20:51:49.955104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:51:49.955110       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:51:50.055334       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:51:50.055440       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:51:50.055470       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [68acb68b93b72cb9c251bab9f93e45d90bb80f9e5df2a4d9840dfa88465b5ad8] <==
	I1126 20:51:46.378172       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:51:50.824910       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:51:50.824941       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:51:50.831265       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1126 20:51:50.831367       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1126 20:51:50.831426       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:51:50.831469       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:51:50.831516       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:51:50.831546       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:51:50.831744       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:51:50.831855       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:51:50.932225       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:51:50.932315       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1126 20:51:50.932349       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:51:52 embed-certs-616586 kubelet[788]: I1126 20:51:52.614205     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zg22m\" (UID: \"bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m"
	Nov 26 20:51:52 embed-certs-616586 kubelet[788]: I1126 20:51:52.714851     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/11e8fba4-bcc7-4952-a344-fcd4f0f6240a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6hlql\" (UID: \"11e8fba4-bcc7-4952-a344-fcd4f0f6240a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hlql"
	Nov 26 20:51:52 embed-certs-616586 kubelet[788]: I1126 20:51:52.715122     788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv9qf\" (UniqueName: \"kubernetes.io/projected/11e8fba4-bcc7-4952-a344-fcd4f0f6240a-kube-api-access-mv9qf\") pod \"kubernetes-dashboard-855c9754f9-6hlql\" (UID: \"11e8fba4-bcc7-4952-a344-fcd4f0f6240a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hlql"
	Nov 26 20:51:52 embed-certs-616586 kubelet[788]: W1126 20:51:52.851284     788 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/crio-01e663060f2d73c3720001cbfc5b79c53047336658fb6b7ea7e93647dd490fcc WatchSource:0}: Error finding container 01e663060f2d73c3720001cbfc5b79c53047336658fb6b7ea7e93647dd490fcc: Status 404 returned error can't find the container with id 01e663060f2d73c3720001cbfc5b79c53047336658fb6b7ea7e93647dd490fcc
	Nov 26 20:51:53 embed-certs-616586 kubelet[788]: I1126 20:51:53.094306     788 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 26 20:51:53 embed-certs-616586 kubelet[788]: W1126 20:51:53.161039     788 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/76154eec8a121e1758faf53d86779838a42a3fe8267c765253c0803ad368fc6d/crio-37d631cbcf8ebfd77a54adbea29d27f025f149f51822250cbdb0f0412de8789d WatchSource:0}: Error finding container 37d631cbcf8ebfd77a54adbea29d27f025f149f51822250cbdb0f0412de8789d: Status 404 returned error can't find the container with id 37d631cbcf8ebfd77a54adbea29d27f025f149f51822250cbdb0f0412de8789d
	Nov 26 20:51:57 embed-certs-616586 kubelet[788]: I1126 20:51:57.570614     788 scope.go:117] "RemoveContainer" containerID="ee4c200a57ad9a2f98924af4a9d48118b8f36942084f48db81f30744a865594e"
	Nov 26 20:51:58 embed-certs-616586 kubelet[788]: I1126 20:51:58.575447     788 scope.go:117] "RemoveContainer" containerID="ee4c200a57ad9a2f98924af4a9d48118b8f36942084f48db81f30744a865594e"
	Nov 26 20:51:58 embed-certs-616586 kubelet[788]: I1126 20:51:58.575776     788 scope.go:117] "RemoveContainer" containerID="a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361"
	Nov 26 20:51:58 embed-certs-616586 kubelet[788]: E1126 20:51:58.575921     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zg22m_kubernetes-dashboard(bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m" podUID="bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c"
	Nov 26 20:51:59 embed-certs-616586 kubelet[788]: I1126 20:51:59.578571     788 scope.go:117] "RemoveContainer" containerID="a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361"
	Nov 26 20:51:59 embed-certs-616586 kubelet[788]: E1126 20:51:59.578726     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zg22m_kubernetes-dashboard(bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m" podUID="bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c"
	Nov 26 20:52:02 embed-certs-616586 kubelet[788]: I1126 20:52:02.815652     788 scope.go:117] "RemoveContainer" containerID="a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361"
	Nov 26 20:52:02 embed-certs-616586 kubelet[788]: E1126 20:52:02.816407     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zg22m_kubernetes-dashboard(bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m" podUID="bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c"
	Nov 26 20:52:17 embed-certs-616586 kubelet[788]: I1126 20:52:17.469638     788 scope.go:117] "RemoveContainer" containerID="a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361"
	Nov 26 20:52:17 embed-certs-616586 kubelet[788]: I1126 20:52:17.655584     788 scope.go:117] "RemoveContainer" containerID="a2620c5baf2bef5a72e39cab3148246802712fdfc3bf61481de2408857e5d361"
	Nov 26 20:52:17 embed-certs-616586 kubelet[788]: I1126 20:52:17.655959     788 scope.go:117] "RemoveContainer" containerID="b242c58cd6a92f6ae5ee1f8d498bfe274cd49d08c6f8e168776f53723a9db999"
	Nov 26 20:52:17 embed-certs-616586 kubelet[788]: E1126 20:52:17.656157     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zg22m_kubernetes-dashboard(bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m" podUID="bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c"
	Nov 26 20:52:17 embed-certs-616586 kubelet[788]: I1126 20:52:17.696143     788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6hlql" podStartSLOduration=16.411288158 podStartE2EDuration="25.696121637s" podCreationTimestamp="2025-11-26 20:51:52 +0000 UTC" firstStartedPulling="2025-11-26 20:51:53.167763171 +0000 UTC m=+10.984311421" lastFinishedPulling="2025-11-26 20:52:02.45259665 +0000 UTC m=+20.269144900" observedRunningTime="2025-11-26 20:52:02.622020932 +0000 UTC m=+20.438569199" watchObservedRunningTime="2025-11-26 20:52:17.696121637 +0000 UTC m=+35.512669895"
	Nov 26 20:52:19 embed-certs-616586 kubelet[788]: I1126 20:52:19.663737     788 scope.go:117] "RemoveContainer" containerID="079110bf8f15d397c0fdba7593f783a31a000fcd6b92de2b4477a09731aab5bb"
	Nov 26 20:52:22 embed-certs-616586 kubelet[788]: I1126 20:52:22.815157     788 scope.go:117] "RemoveContainer" containerID="b242c58cd6a92f6ae5ee1f8d498bfe274cd49d08c6f8e168776f53723a9db999"
	Nov 26 20:52:22 embed-certs-616586 kubelet[788]: E1126 20:52:22.815339     788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zg22m_kubernetes-dashboard(bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zg22m" podUID="bd15c2a1-bc97-4b19-9a3b-f3ee85f3514c"
	Nov 26 20:52:37 embed-certs-616586 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:52:37 embed-certs-616586 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:52:37 embed-certs-616586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cc0cb0d7adecab0e806790ede0bafa00cebde36ff2976b7770c516f4f5ebb8c0] <==
	2025/11/26 20:52:02 Starting overwatch
	2025/11/26 20:52:02 Using namespace: kubernetes-dashboard
	2025/11/26 20:52:02 Using in-cluster config to connect to apiserver
	2025/11/26 20:52:02 Using secret token for csrf signing
	2025/11/26 20:52:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:52:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:52:02 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:52:02 Generating JWE encryption key
	2025/11/26 20:52:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:52:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:52:03 Initializing JWE encryption key from synchronized object
	2025/11/26 20:52:03 Creating in-cluster Sidecar client
	2025/11/26 20:52:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:52:03 Serving insecurely on HTTP port: 9090
	2025/11/26 20:52:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [079110bf8f15d397c0fdba7593f783a31a000fcd6b92de2b4477a09731aab5bb] <==
	I1126 20:51:49.271711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:52:19.274262       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f678d3447e490743b2ee0d2e868f230525b963a8a7eda39a7562f91729595a9b] <==
	I1126 20:52:19.715937       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:52:19.731934       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:52:19.732061       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:52:19.734680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:23.189816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:27.450359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:31.048238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:34.101427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:37.124881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:37.135505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:52:37.135805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:52:37.137240       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-616586_fdafab16-02d0-4c7b-a6f4-4de0b0845a82!
	I1126 20:52:37.140296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"74d92165-f92e-42f6-bb51-54e16bfb29a8", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-616586_fdafab16-02d0-4c7b-a6f4-4de0b0845a82 became leader
	W1126 20:52:37.152096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:37.160322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:52:37.239957       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-616586_fdafab16-02d0-4c7b-a6f4-4de0b0845a82!
	W1126 20:52:39.164335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:39.181990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:41.186281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:41.191659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:43.194943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:52:43.201817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-616586 -n embed-certs-616586
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-616586 -n embed-certs-616586: exit status 2 (336.86223ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-616586 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-583801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-583801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (326.496759ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:53:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-583801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-583801
helpers_test.go:243: (dbg) docker inspect newest-cni-583801:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c",
	        "Created": "2025-11-26T20:52:53.985671529Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229167,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:52:54.056889828Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/hosts",
	        "LogPath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c-json.log",
	        "Name": "/newest-cni-583801",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-583801:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-583801",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c",
	                "LowerDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-583801",
	                "Source": "/var/lib/docker/volumes/newest-cni-583801/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-583801",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-583801",
	                "name.minikube.sigs.k8s.io": "newest-cni-583801",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32993ca20c7ef056263ea6f8397a53c4e48eda0d294623875bb18ad5832006b5",
	            "SandboxKey": "/var/run/docker/netns/32993ca20c7e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-583801": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:d4:4e:85:63:b8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e35a642217b331a1c1ac5d84616493887df16b6946bf83ba7ad44b2d7f7799d7",
	                    "EndpointID": "97787929a51ecf82b69ac60e6a6bcc6169bc5a46fe1bc834abd023a515ed7d44",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-583801",
	                        "c96a716e290f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-583801 -n newest-cni-583801
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-583801 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-583801 logs -n 25: (1.207398686s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-956694 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ addons  │ enable dashboard -p no-preload-956694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p cert-expiration-164741                                                                                                                                                                                                                     │ cert-expiration-164741       │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:49 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:49 UTC │ 26 Nov 25 20:51 UTC │
	│ image   │ no-preload-956694 image list --format=json                                                                                                                                                                                                    │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ pause   │ -p no-preload-956694 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │                     │
	│ delete  │ -p no-preload-956694                                                                                                                                                                                                                          │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p no-preload-956694                                                                                                                                                                                                                          │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p disable-driver-mounts-180932                                                                                                                                                                                                               │ disable-driver-mounts-180932 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │                     │
	│ stop    │ -p embed-certs-616586 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-616586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538119 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ image   │ embed-certs-616586 image list --format=json                                                                                                                                                                                                   │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ pause   │ -p embed-certs-616586 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ delete  │ -p embed-certs-616586                                                                                                                                                                                                                         │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ delete  │ -p embed-certs-616586                                                                                                                                                                                                                         │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ start   │ -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-583801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:52:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:52:47.273151  228196 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:52:47.273355  228196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:52:47.273377  228196 out.go:374] Setting ErrFile to fd 2...
	I1126 20:52:47.273394  228196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:52:47.273785  228196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:52:47.281273  228196 out.go:368] Setting JSON to false
	I1126 20:52:47.282245  228196 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5698,"bootTime":1764184670,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:52:47.282323  228196 start.go:143] virtualization:  
	I1126 20:52:47.286464  228196 out.go:179] * [newest-cni-583801] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:52:47.290250  228196 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:52:47.290431  228196 notify.go:221] Checking for updates...
	I1126 20:52:47.294325  228196 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:52:47.297645  228196 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:52:47.300928  228196 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:52:47.304195  228196 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:52:47.307401  228196 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:52:47.311205  228196 config.go:182] Loaded profile config "default-k8s-diff-port-538119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:52:47.311381  228196 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:52:47.351727  228196 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:52:47.351894  228196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:52:47.458056  228196 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-26 20:52:47.446705183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:52:47.458154  228196 docker.go:319] overlay module found
	I1126 20:52:47.461612  228196 out.go:179] * Using the docker driver based on user configuration
	I1126 20:52:47.465188  228196 start.go:309] selected driver: docker
	I1126 20:52:47.465207  228196 start.go:927] validating driver "docker" against <nil>
	I1126 20:52:47.465220  228196 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:52:47.465886  228196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:52:47.553336  228196 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-26 20:52:47.540415555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:52:47.553493  228196 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1126 20:52:47.553511  228196 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1126 20:52:47.553726  228196 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:52:47.556962  228196 out.go:179] * Using Docker driver with root privileges
	I1126 20:52:47.559955  228196 cni.go:84] Creating CNI manager for ""
	I1126 20:52:47.560031  228196 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:52:47.560041  228196 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 20:52:47.560127  228196 start.go:353] cluster config:
	{Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:52:47.563342  228196 out.go:179] * Starting "newest-cni-583801" primary control-plane node in "newest-cni-583801" cluster
	I1126 20:52:47.566265  228196 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:52:47.569295  228196 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:52:47.572085  228196 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:52:47.572146  228196 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:52:47.572156  228196 cache.go:65] Caching tarball of preloaded images
	I1126 20:52:47.572244  228196 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:52:47.572253  228196 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:52:47.572364  228196 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/config.json ...
	I1126 20:52:47.572382  228196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/config.json: {Name:mk5dd0e46928b23d802f1fec2fc166ac3bdaf603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:52:47.572528  228196 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:52:47.603451  228196 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:52:47.603469  228196 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:52:47.603482  228196 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:52:47.603512  228196 start.go:360] acquireMachinesLock for newest-cni-583801: {Name:mk5a5c4e74106a93e4d595458226ad93568e2c2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:52:47.603604  228196 start.go:364] duration metric: took 77.996µs to acquireMachinesLock for "newest-cni-583801"
	I1126 20:52:47.603626  228196 start.go:93] Provisioning new machine with config: &{Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:52:47.603692  228196 start.go:125] createHost starting for "" (driver="docker")
	I1126 20:52:46.022278  226403 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:52:46.022296  226403 machine.go:97] duration metric: took 4.652377148s to provisionDockerMachine
	I1126 20:52:46.022307  226403 start.go:293] postStartSetup for "default-k8s-diff-port-538119" (driver="docker")
	I1126 20:52:46.022318  226403 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:52:46.022380  226403 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:52:46.022430  226403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:52:46.049575  226403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:52:46.160809  226403 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:52:46.165079  226403 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:52:46.165109  226403 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:52:46.165120  226403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:52:46.165172  226403 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:52:46.165254  226403 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:52:46.165365  226403 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:52:46.173688  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:52:46.201903  226403 start.go:296] duration metric: took 179.581157ms for postStartSetup
	I1126 20:52:46.202015  226403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:52:46.202080  226403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:52:46.222899  226403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:52:46.331194  226403 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:52:46.337222  226403 fix.go:56] duration metric: took 5.427869425s for fixHost
	I1126 20:52:46.337245  226403 start.go:83] releasing machines lock for "default-k8s-diff-port-538119", held for 5.427919188s
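	The two `df`/`awk` probes above each pull one column from the second line of a `df` report (the used percentage of `/var`, then its free space in gigabytes). A minimal Python sketch of that parsing step, with a made-up `df -h /var` transcript for illustration:

```python
def df_field(df_output: str, column: int) -> str:
    """Equivalent of `df ... | awk 'NR==2{print $N}'`:
    take the second line of the report, Nth whitespace-separated column."""
    return df_output.splitlines()[1].split()[column - 1]

# Hypothetical transcript of `df -h /var` inside the node container:
sample = (
    "Filesystem      Size  Used Avail Use% Mounted on\n"
    "overlay          20G  9.1G   11G  47% /var\n"
)
print(df_field(sample, 5))  # the Use% column
```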
	I1126 20:52:46.337315  226403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-538119
	I1126 20:52:46.356947  226403 ssh_runner.go:195] Run: cat /version.json
	I1126 20:52:46.357108  226403 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:52:46.357212  226403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:52:46.357361  226403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:52:46.378847  226403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:52:46.411512  226403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:52:46.489727  226403 ssh_runner.go:195] Run: systemctl --version
	I1126 20:52:46.598997  226403 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:52:46.648397  226403 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:52:46.654558  226403 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:52:46.654678  226403 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:52:46.663699  226403 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:52:46.663778  226403 start.go:496] detecting cgroup driver to use...
	I1126 20:52:46.663825  226403 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:52:46.663908  226403 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:52:46.682798  226403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:52:46.702297  226403 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:52:46.702372  226403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:52:46.719839  226403 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:52:46.737097  226403 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:52:46.917697  226403 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:52:47.059059  226403 docker.go:234] disabling docker service ...
	I1126 20:52:47.059121  226403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:52:47.074939  226403 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:52:47.089804  226403 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:52:47.235594  226403 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:52:47.381508  226403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:52:47.394832  226403 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:52:47.410327  226403 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:52:47.410393  226403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:52:47.419718  226403 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:52:47.419782  226403 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:52:47.429240  226403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:52:47.438233  226403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:52:47.447985  226403 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:52:47.457418  226403 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:52:47.467421  226403 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:52:47.476603  226403 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
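	Taken together, the chain of `sed` edits above rewrites cri-o's drop-in config in place: set the pause image, force the `cgroupfs` cgroup manager, pin `conmon_cgroup` to `pod`, and prepend an unprivileged-port sysctl to `default_sysctls`. Assuming a stock drop-in, the relevant keys end up roughly as follows (a sketch — the surrounding section headers and other keys depend on the cri-o version):

```toml
# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits above)
pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
```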
	I1126 20:52:47.485379  226403 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:52:47.499386  226403 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:52:47.511023  226403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:52:47.652817  226403 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:52:47.875860  226403 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:52:47.875926  226403 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:52:47.880395  226403 start.go:564] Will wait 60s for crictl version
	I1126 20:52:47.880453  226403 ssh_runner.go:195] Run: which crictl
	I1126 20:52:47.886909  226403 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:52:47.923802  226403 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:52:47.923887  226403 ssh_runner.go:195] Run: crio --version
	I1126 20:52:47.957462  226403 ssh_runner.go:195] Run: crio --version
	I1126 20:52:48.002582  226403 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:52:48.006577  226403 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-538119 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:52:48.025344  226403 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1126 20:52:48.030786  226403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
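	The `grep -v`/`echo` one-liner above is an idempotent upsert: filter out any existing line for the host name, append the fresh `IP<TAB>name` mapping, and copy the temp file back over `/etc/hosts`. A small Python sketch of the same filter-and-append step (not minikube's actual code):

```python
def upsert_hosts(lines, ip, name):
    """Drop lines already ending in '<TAB><name>', then append 'ip<TAB>name'
    (mirrors the grep -v / echo pipeline in the log above)."""
    kept = [l for l in lines if not l.rstrip("\n").endswith("\t" + name)]
    kept.append(f"{ip}\t{name}\n")
    return kept

hosts = ["127.0.0.1\tlocalhost\n", "192.168.76.9\thost.minikube.internal\n"]
hosts = upsert_hosts(hosts, "192.168.76.1", "host.minikube.internal")
```

Running the same upsert twice leaves a single, current entry — which is why the log can repeat it safely on every start.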
	I1126 20:52:48.049540  226403 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-538119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:52:48.049667  226403 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:52:48.049736  226403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:52:48.100829  226403 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:52:48.100854  226403 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:52:48.100949  226403 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:52:48.159890  226403 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:52:48.159911  226403 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:52:48.159919  226403 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1126 20:52:48.160022  226403 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-538119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:52:48.160093  226403 ssh_runner.go:195] Run: crio config
	I1126 20:52:48.222822  226403 cni.go:84] Creating CNI manager for ""
	I1126 20:52:48.222846  226403 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:52:48.222891  226403 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1126 20:52:48.222921  226403 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-538119 NodeName:default-k8s-diff-port-538119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:52:48.223090  226403 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-538119"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:52:48.223185  226403 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:52:48.234762  226403 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:52:48.234873  226403 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:52:48.268452  226403 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1126 20:52:48.290413  226403 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:52:48.311274  226403 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1126 20:52:48.325242  226403 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:52:48.329144  226403 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:52:48.339237  226403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:52:48.509997  226403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:52:48.534490  226403 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119 for IP: 192.168.76.2
	I1126 20:52:48.534517  226403 certs.go:195] generating shared ca certs ...
	I1126 20:52:48.534532  226403 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:52:48.534665  226403 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:52:48.534726  226403 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:52:48.534739  226403 certs.go:257] generating profile certs ...
	I1126 20:52:48.534832  226403 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.key
	I1126 20:52:48.534907  226403 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.key.08a6970d
	I1126 20:52:48.534951  226403 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/proxy-client.key
	I1126 20:52:48.535068  226403 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:52:48.535104  226403 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:52:48.535116  226403 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:52:48.535143  226403 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:52:48.535174  226403 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:52:48.535201  226403 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:52:48.535254  226403 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:52:48.535862  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:52:48.556753  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:52:48.617311  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:52:48.654584  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:52:48.703237  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1126 20:52:48.761428  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:52:48.811388  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:52:48.868250  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1126 20:52:48.887065  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:52:48.905377  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:52:48.927259  226403 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:52:48.955509  226403 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:52:48.980789  226403 ssh_runner.go:195] Run: openssl version
	I1126 20:52:48.987693  226403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:52:48.997124  226403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:52:49.001380  226403 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:52:49.001485  226403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:52:49.045453  226403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:52:49.054352  226403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:52:49.064857  226403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:52:49.069127  226403 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:52:49.069241  226403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:52:49.111888  226403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:52:49.120609  226403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:52:49.129812  226403 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:52:49.134407  226403 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:52:49.134529  226403 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:52:49.180147  226403 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:52:49.188894  226403 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:52:49.193192  226403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:52:49.236131  226403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:52:49.290444  226403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:52:49.349328  226403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:52:49.447870  226403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:52:49.622621  226403 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
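	Each `openssl x509 -checkend 86400` invocation above asks whether the certificate expires within the next 24 hours (nonzero exit status if so). Given a certificate's `notAfter` field in OpenSSL's text form, the Python stdlib can answer the same question — a sketch of the check, not minikube's actual implementation:

```python
import ssl
import time

def expires_within(not_after: str, seconds: int = 86400) -> bool:
    """True if the cert's notAfter (e.g. 'Jun  1 12:00:00 2030 GMT') falls
    within the next `seconds` -- the test `openssl x509 -checkend` performs."""
    return ssl.cert_time_to_seconds(not_after) < time.time() + seconds

print(expires_within("Jan  1 00:00:00 2000 GMT"))  # long expired
print(expires_within("Jan  1 00:00:00 2999 GMT"))  # far in the future
```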
	I1126 20:52:49.791583  226403 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-538119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-538119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:52:49.791725  226403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:52:49.791816  226403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:52:49.931347  226403 cri.go:89] found id: "ebea4280eb674478aadbae605d2061b7c068854e5d7ec7d5b4fb24f16fe0cfb9"
	I1126 20:52:49.931373  226403 cri.go:89] found id: "fc58d11ea93321e33cff7333a94130c39e21c09f52f801603b1a6a3a6ad98d31"
	I1126 20:52:49.931378  226403 cri.go:89] found id: "220d1f4d36b36e980115005c48030f8c1bcbf01b34d094b15f89d89ca0ae205f"
	I1126 20:52:49.931384  226403 cri.go:89] found id: "192c4461955e12aeca35caebeb96aaa6b7c140e0c20bce5b442625309d73063a"
	I1126 20:52:49.931387  226403 cri.go:89] found id: ""
	I1126 20:52:49.931471  226403 ssh_runner.go:195] Run: sudo runc list -f json
	W1126 20:52:49.950659  226403 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:52:49Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:52:49.950790  226403 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:52:49.968263  226403 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:52:49.968286  226403 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:52:49.968364  226403 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:52:49.984734  226403 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:52:49.985212  226403 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-538119" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:52:49.985356  226403 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-2326/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-538119" cluster setting kubeconfig missing "default-k8s-diff-port-538119" context setting]
	I1126 20:52:49.985713  226403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:52:49.987335  226403 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:52:50.002293  226403 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1126 20:52:50.002339  226403 kubeadm.go:602] duration metric: took 34.03737ms to restartPrimaryControlPlane
	I1126 20:52:50.002376  226403 kubeadm.go:403] duration metric: took 210.802055ms to StartCluster
	I1126 20:52:50.002393  226403 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:52:50.002474  226403 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:52:50.003237  226403 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:52:50.003524  226403 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:52:50.003952  226403 config.go:182] Loaded profile config "default-k8s-diff-port-538119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:52:50.003932  226403 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:52:50.004026  226403 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-538119"
	I1126 20:52:50.004040  226403 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-538119"
	W1126 20:52:50.004047  226403 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:52:50.004071  226403 host.go:66] Checking if "default-k8s-diff-port-538119" exists ...
	I1126 20:52:50.004524  226403 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538119 --format={{.State.Status}}
	I1126 20:52:50.004678  226403 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-538119"
	I1126 20:52:50.004690  226403 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-538119"
	W1126 20:52:50.004707  226403 addons.go:248] addon dashboard should already be in state true
	I1126 20:52:50.004727  226403 host.go:66] Checking if "default-k8s-diff-port-538119" exists ...
	I1126 20:52:50.005102  226403 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538119 --format={{.State.Status}}
	I1126 20:52:50.005431  226403 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-538119"
	I1126 20:52:50.005455  226403 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-538119"
	I1126 20:52:50.005762  226403 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538119 --format={{.State.Status}}
	I1126 20:52:50.012210  226403 out.go:179] * Verifying Kubernetes components...
	I1126 20:52:50.021502  226403 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:52:50.075479  226403 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:52:50.075694  226403 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-538119"
	W1126 20:52:50.075706  226403 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:52:50.075729  226403 host.go:66] Checking if "default-k8s-diff-port-538119" exists ...
	I1126 20:52:50.076168  226403 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538119 --format={{.State.Status}}
	I1126 20:52:50.076321  226403 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:52:50.081365  226403 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:52:50.081465  226403 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:52:50.081485  226403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:52:50.081555  226403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:52:50.084426  226403 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:52:50.084456  226403 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:52:50.084529  226403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:52:50.125258  226403 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:52:50.125281  226403 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:52:50.125345  226403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:52:50.142107  226403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:52:50.155654  226403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:52:50.177189  226403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:52:50.465532  226403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:52:50.500545  226403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:52:50.522680  226403 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:52:50.522707  226403 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:52:50.538069  226403 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:52:50.602752  226403 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:52:50.602774  226403 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:52:47.607093  228196 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1126 20:52:47.607296  228196 start.go:159] libmachine.API.Create for "newest-cni-583801" (driver="docker")
	I1126 20:52:47.607321  228196 client.go:173] LocalClient.Create starting
	I1126 20:52:47.607396  228196 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem
	I1126 20:52:47.607429  228196 main.go:143] libmachine: Decoding PEM data...
	I1126 20:52:47.607445  228196 main.go:143] libmachine: Parsing certificate...
	I1126 20:52:47.607504  228196 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem
	I1126 20:52:47.607521  228196 main.go:143] libmachine: Decoding PEM data...
	I1126 20:52:47.607535  228196 main.go:143] libmachine: Parsing certificate...
	I1126 20:52:47.607887  228196 cli_runner.go:164] Run: docker network inspect newest-cni-583801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1126 20:52:47.631588  228196 cli_runner.go:211] docker network inspect newest-cni-583801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1126 20:52:47.631683  228196 network_create.go:284] running [docker network inspect newest-cni-583801] to gather additional debugging logs...
	I1126 20:52:47.631700  228196 cli_runner.go:164] Run: docker network inspect newest-cni-583801
	W1126 20:52:47.646836  228196 cli_runner.go:211] docker network inspect newest-cni-583801 returned with exit code 1
	I1126 20:52:47.646863  228196 network_create.go:287] error running [docker network inspect newest-cni-583801]: docker network inspect newest-cni-583801: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-583801 not found
	I1126 20:52:47.646875  228196 network_create.go:289] output of [docker network inspect newest-cni-583801]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-583801 not found
	
	** /stderr **
	I1126 20:52:47.646968  228196 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:52:47.666592  228196 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20cb65a83ad5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:26:47:2b:2e:03} reservation:<nil>}
	I1126 20:52:47.666931  228196 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-16105a7ff776 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c6:75:f6:9d:ad:ac} reservation:<nil>}
	I1126 20:52:47.667244  228196 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f1c69ea9dfa3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:b7:bf:8a:44:80} reservation:<nil>}
	I1126 20:52:47.667488  228196 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-58099cffa65b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ba:fd:f8:90:f2:b0} reservation:<nil>}
	I1126 20:52:47.667841  228196 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4e000}
	I1126 20:52:47.667858  228196 network_create.go:124] attempt to create docker network newest-cni-583801 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1126 20:52:47.667911  228196 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-583801 newest-cni-583801
	I1126 20:52:47.737338  228196 network_create.go:108] docker network newest-cni-583801 192.168.85.0/24 created
	I1126 20:52:47.737368  228196 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-583801" container
	I1126 20:52:47.737453  228196 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1126 20:52:47.755894  228196 cli_runner.go:164] Run: docker volume create newest-cni-583801 --label name.minikube.sigs.k8s.io=newest-cni-583801 --label created_by.minikube.sigs.k8s.io=true
	I1126 20:52:47.784378  228196 oci.go:103] Successfully created a docker volume newest-cni-583801
	I1126 20:52:47.784459  228196 cli_runner.go:164] Run: docker run --rm --name newest-cni-583801-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-583801 --entrypoint /usr/bin/test -v newest-cni-583801:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1126 20:52:48.366277  228196 oci.go:107] Successfully prepared a docker volume newest-cni-583801
	I1126 20:52:48.366347  228196 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:52:48.366366  228196 kic.go:194] Starting extracting preloaded images to volume ...
	I1126 20:52:48.366442  228196 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-583801:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1126 20:52:50.763180  226403 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:52:50.763206  226403 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:52:50.859677  226403 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:52:50.859702  226403 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:52:50.990397  226403 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:52:50.990423  226403 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:52:51.090197  226403 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:52:51.090226  226403 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:52:51.149699  226403 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:52:51.149725  226403 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:52:51.194465  226403 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:52:51.194491  226403 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:52:51.227593  226403 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:52:51.227619  226403 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:52:51.256961  226403 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:52:53.851484  228196 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-583801:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (5.485006795s)
	I1126 20:52:53.851513  228196 kic.go:203] duration metric: took 5.485150996s to extract preloaded images to volume ...
	W1126 20:52:53.851647  228196 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1126 20:52:53.851744  228196 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1126 20:52:53.958728  228196 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-583801 --name newest-cni-583801 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-583801 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-583801 --network newest-cni-583801 --ip 192.168.85.2 --volume newest-cni-583801:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1126 20:52:54.424102  228196 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Running}}
	I1126 20:52:54.450705  228196 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:52:54.478341  228196 cli_runner.go:164] Run: docker exec newest-cni-583801 stat /var/lib/dpkg/alternatives/iptables
	I1126 20:52:54.548378  228196 oci.go:144] the created container "newest-cni-583801" has a running status.
	I1126 20:52:54.548409  228196 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa...
	I1126 20:52:54.872794  228196 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1126 20:52:54.905583  228196 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:52:54.936252  228196 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1126 20:52:54.936277  228196 kic_runner.go:114] Args: [docker exec --privileged newest-cni-583801 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1126 20:52:54.996060  228196 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:52:55.024867  228196 machine.go:94] provisionDockerMachine start ...
	I1126 20:52:55.024967  228196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:52:55.048845  228196 main.go:143] libmachine: Using SSH client type: native
	I1126 20:52:55.049192  228196 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1126 20:52:55.049209  228196 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:52:55.049877  228196 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42498->127.0.0.1:33083: read: connection reset by peer
	I1126 20:52:56.164454  226403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.698889421s)
	I1126 20:52:57.690517  226403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.1899368s)
	I1126 20:52:57.690576  226403 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.152484046s)
	I1126 20:52:57.690597  226403 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-538119" to be "Ready" ...
	I1126 20:52:57.690870  226403 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.433876157s)
	I1126 20:52:57.693676  226403 node_ready.go:49] node "default-k8s-diff-port-538119" is "Ready"
	I1126 20:52:57.693764  226403 node_ready.go:38] duration metric: took 3.153657ms for node "default-k8s-diff-port-538119" to be "Ready" ...
	I1126 20:52:57.693796  226403 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:52:57.693857  226403 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-538119 addons enable metrics-server
	
	I1126 20:52:57.694178  226403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:52:57.699889  226403 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1126 20:52:57.702726  226403 addons.go:530] duration metric: took 7.698792004s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1126 20:52:57.713811  226403 api_server.go:72] duration metric: took 7.710238133s to wait for apiserver process to appear ...
	I1126 20:52:57.713844  226403 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:52:57.713873  226403 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1126 20:52:57.722307  226403 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1126 20:52:57.723379  226403 api_server.go:141] control plane version: v1.34.1
	I1126 20:52:57.723437  226403 api_server.go:131] duration metric: took 9.578096ms to wait for apiserver health ...
	I1126 20:52:57.723472  226403 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:52:57.727171  226403 system_pods.go:59] 8 kube-system pods found
	I1126 20:52:57.727249  226403 system_pods.go:61] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:52:57.727274  226403 system_pods.go:61] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:52:57.727314  226403 system_pods.go:61] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:52:57.727339  226403 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:52:57.727364  226403 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:52:57.727392  226403 system_pods.go:61] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:52:57.727422  226403 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:52:57.727457  226403 system_pods.go:61] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:52:57.727481  226403 system_pods.go:74] duration metric: took 3.988981ms to wait for pod list to return data ...
	I1126 20:52:57.727502  226403 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:52:57.731226  226403 default_sa.go:45] found service account: "default"
	I1126 20:52:57.731300  226403 default_sa.go:55] duration metric: took 3.764773ms for default service account to be created ...
	I1126 20:52:57.731324  226403 system_pods.go:116] waiting for k8s-apps to be running ...
	I1126 20:52:57.736130  226403 system_pods.go:86] 8 kube-system pods found
	I1126 20:52:57.736210  226403 system_pods.go:89] "coredns-66bc5c9577-whx45" [4c930cb6-3a88-453d-87b2-982b117252c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1126 20:52:57.736249  226403 system_pods.go:89] "etcd-default-k8s-diff-port-538119" [350b0a49-cb40-4e7e-979e-2603cd98f40a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:52:57.736298  226403 system_pods.go:89] "kindnet-ts8sn" [689c63b4-0698-4849-b955-38da30ca9d27] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:52:57.736332  226403 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-538119" [1075acc3-91b8-413d-8236-1458b8b2f755] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:52:57.736361  226403 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-538119" [9b1dc77b-3053-45d5-9c72-f9f755941068] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:52:57.736407  226403 system_pods.go:89] "kube-proxy-sp5l4" [fe1ccf23-f465-4b93-b09e-c5a07258326f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1126 20:52:57.736432  226403 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-538119" [641a56bf-2138-4b46-b797-b787b49f2505] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:52:57.736451  226403 system_pods.go:89] "storage-provisioner" [c2af4292-99c1-4828-a90f-f165d964345f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1126 20:52:57.736474  226403 system_pods.go:126] duration metric: took 5.131282ms to wait for k8s-apps to be running ...
	I1126 20:52:57.736509  226403 system_svc.go:44] waiting for kubelet service to be running ....
	I1126 20:52:57.736582  226403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:52:57.750930  226403 system_svc.go:56] duration metric: took 14.414322ms WaitForService to wait for kubelet
	I1126 20:52:57.750970  226403 kubeadm.go:587] duration metric: took 7.747412933s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:52:57.750989  226403 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:52:57.754218  226403 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:52:57.754260  226403 node_conditions.go:123] node cpu capacity is 2
	I1126 20:52:57.754274  226403 node_conditions.go:105] duration metric: took 3.278355ms to run NodePressure ...
	I1126 20:52:57.754287  226403 start.go:242] waiting for startup goroutines ...
	I1126 20:52:57.754294  226403 start.go:247] waiting for cluster config update ...
	I1126 20:52:57.754309  226403 start.go:256] writing updated cluster config ...
	I1126 20:52:57.754630  226403 ssh_runner.go:195] Run: rm -f paused
	I1126 20:52:57.758430  226403 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1126 20:52:57.761879  226403 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-whx45" in "kube-system" namespace to be "Ready" or be gone ...
	W1126 20:52:59.770333  226403 pod_ready.go:104] pod "coredns-66bc5c9577-whx45" is not "Ready", error: <nil>
	I1126 20:52:58.229596  228196 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-583801
	
	I1126 20:52:58.229623  228196 ubuntu.go:182] provisioning hostname "newest-cni-583801"
	I1126 20:52:58.229701  228196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:52:58.255584  228196 main.go:143] libmachine: Using SSH client type: native
	I1126 20:52:58.255895  228196 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1126 20:52:58.255905  228196 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-583801 && echo "newest-cni-583801" | sudo tee /etc/hostname
	I1126 20:52:58.420129  228196 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-583801
	
	I1126 20:52:58.420215  228196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:52:58.439236  228196 main.go:143] libmachine: Using SSH client type: native
	I1126 20:52:58.439544  228196 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1126 20:52:58.439564  228196 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-583801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-583801/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-583801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:52:58.590153  228196 main.go:143] libmachine: SSH cmd err, output: <nil>: 
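The SSH command above conditionally rewrites the `127.0.1.1` line in /etc/hosts so the new node name resolves locally. A minimal sketch of the same logic run against a scratch file (the `mktemp` path and seed contents are illustrative; the real command targets /etc/hosts via sudo over SSH):

```shell
# Sketch of minikube's /etc/hosts hostname patch, against a scratch file.
NAME=newest-cni-583801
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # An existing 127.0.1.1 line is rewritten in place ...
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # ... otherwise a fresh entry is appended.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
grep "$NAME" "$HOSTS"
```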
	I1126 20:52:58.590188  228196 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:52:58.590208  228196 ubuntu.go:190] setting up certificates
	I1126 20:52:58.590218  228196 provision.go:84] configureAuth start
	I1126 20:52:58.590290  228196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:52:58.608577  228196 provision.go:143] copyHostCerts
	I1126 20:52:58.608643  228196 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:52:58.608660  228196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:52:58.608750  228196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:52:58.608843  228196 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:52:58.608856  228196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:52:58.608883  228196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:52:58.608962  228196 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:52:58.608970  228196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:52:58.608999  228196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:52:58.609051  228196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.newest-cni-583801 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-583801]
	I1126 20:52:58.709333  228196 provision.go:177] copyRemoteCerts
	I1126 20:52:58.709402  228196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:52:58.709448  228196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:52:58.731189  228196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:52:58.833668  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:52:58.853291  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:52:58.875454  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:52:58.893808  228196 provision.go:87] duration metric: took 303.566356ms to configureAuth
	I1126 20:52:58.893838  228196 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:52:58.894046  228196 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:52:58.894150  228196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:52:58.912144  228196 main.go:143] libmachine: Using SSH client type: native
	I1126 20:52:58.912460  228196 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1126 20:52:58.912479  228196 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:52:59.219919  228196 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:52:59.219949  228196 machine.go:97] duration metric: took 4.195049509s to provisionDockerMachine
	I1126 20:52:59.219960  228196 client.go:176] duration metric: took 11.612634049s to LocalClient.Create
	I1126 20:52:59.219978  228196 start.go:167] duration metric: took 11.612682532s to libmachine.API.Create "newest-cni-583801"
	I1126 20:52:59.219987  228196 start.go:293] postStartSetup for "newest-cni-583801" (driver="docker")
	I1126 20:52:59.219997  228196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:52:59.220055  228196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:52:59.220095  228196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:52:59.238642  228196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:52:59.341903  228196 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:52:59.345469  228196 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:52:59.345499  228196 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:52:59.345512  228196 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:52:59.345565  228196 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:52:59.345652  228196 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:52:59.345761  228196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:52:59.353403  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:52:59.384038  228196 start.go:296] duration metric: took 164.036985ms for postStartSetup
	I1126 20:52:59.384526  228196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:52:59.424209  228196 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/config.json ...
	I1126 20:52:59.424508  228196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:52:59.424548  228196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:52:59.463694  228196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:52:59.575224  228196 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:52:59.584258  228196 start.go:128] duration metric: took 11.980552626s to createHost
	I1126 20:52:59.584285  228196 start.go:83] releasing machines lock for "newest-cni-583801", held for 11.980673771s
	I1126 20:52:59.584352  228196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:52:59.619645  228196 ssh_runner.go:195] Run: cat /version.json
	I1126 20:52:59.619705  228196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:52:59.622359  228196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:52:59.622456  228196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:52:59.646131  228196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:52:59.658084  228196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:52:59.757566  228196 ssh_runner.go:195] Run: systemctl --version
	I1126 20:52:59.855437  228196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:52:59.915247  228196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:52:59.920467  228196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:52:59.920568  228196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:52:59.950845  228196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1126 20:52:59.950871  228196 start.go:496] detecting cgroup driver to use...
	I1126 20:52:59.950921  228196 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:52:59.951001  228196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:52:59.975045  228196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:52:59.995390  228196 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:52:59.995455  228196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:53:00.040477  228196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:53:00.140432  228196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:53:00.456346  228196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:53:00.602983  228196 docker.go:234] disabling docker service ...
	I1126 20:53:00.603047  228196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:53:00.639329  228196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:53:00.658948  228196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:53:00.831861  228196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:53:00.953419  228196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:53:00.965808  228196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:53:00.981553  228196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:53:00.981667  228196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:00.996000  228196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:53:00.996126  228196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:01.010094  228196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:01.019449  228196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:01.030401  228196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:53:01.039036  228196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:01.048364  228196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:01.062500  228196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
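The series of sed edits above pins the pause image, switches CRI-O to the cgroupfs driver, and re-adds `conmon_cgroup` right after `cgroup_manager` (with cgroupfs, conmon must run in the "pod" cgroup). A sketch applying the same edits to a scratch copy of 02-crio.conf (the seed contents are illustrative, not CRI-O's shipped defaults):

```shell
# Sketch of the CRI-O config edits above, on a scratch stand-in for
# /etc/crio/crio.conf.d/02-crio.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Pin the pause image and switch the cgroup driver, mirroring the logged seds.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
# Drop any existing conmon_cgroup, then re-insert it after cgroup_manager.
sed -i '/conmon_cgroup = .*/d' "$CONF"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
cat "$CONF"
```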
	I1126 20:53:01.077249  228196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:53:01.087312  228196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:53:01.094889  228196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:01.254355  228196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:53:01.477466  228196 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:53:01.477528  228196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:53:01.482086  228196 start.go:564] Will wait 60s for crictl version
	I1126 20:53:01.482144  228196 ssh_runner.go:195] Run: which crictl
	I1126 20:53:01.486041  228196 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:53:01.512798  228196 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:53:01.512918  228196 ssh_runner.go:195] Run: crio --version
	I1126 20:53:01.551868  228196 ssh_runner.go:195] Run: crio --version
	I1126 20:53:01.593818  228196 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:53:01.596826  228196 cli_runner.go:164] Run: docker network inspect newest-cni-583801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:53:01.622972  228196 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:53:01.627095  228196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
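The hosts-entry update above uses a filter-and-append pattern: strip any stale line for the name, append the fresh mapping, then copy the temp file into place, which makes the operation safe to repeat. A sketch against a scratch file (the `update_hosts` helper is hypothetical; the scratch file stands in for /etc/hosts):

```shell
# Sketch of the idempotent hosts update used for host.minikube.internal.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n192.168.85.9\thost.minikube.internal\n' > "$HOSTS"

update_hosts() {
    ip=$1 name=$2
    # Drop any stale line for the name, then append the fresh mapping.
    { grep -v "[[:space:]]$name\$" "$HOSTS"; printf '%s\t%s\n' "$ip" "$name"; } > "$HOSTS.new"
    cp "$HOSTS.new" "$HOSTS"
}

update_hosts 192.168.85.1 host.minikube.internal
update_hosts 192.168.85.1 host.minikube.internal  # idempotent: safe to repeat
cat "$HOSTS"
```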
	I1126 20:53:01.640772  228196 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1126 20:53:01.643676  228196 kubeadm.go:884] updating cluster {Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:53:01.643815  228196 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:53:01.643897  228196 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:53:01.703368  228196 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:53:01.703392  228196 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:53:01.703444  228196 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:53:01.742077  228196 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:53:01.742102  228196 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:53:01.742110  228196 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1126 20:53:01.742198  228196 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-583801 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:53:01.742280  228196 ssh_runner.go:195] Run: crio config
	I1126 20:53:01.842144  228196 cni.go:84] Creating CNI manager for ""
	I1126 20:53:01.842217  228196 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:53:01.842255  228196 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1126 20:53:01.842309  228196 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-583801 NodeName:newest-cni-583801 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:53:01.842482  228196 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-583801"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:53:01.842587  228196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:53:01.855870  228196 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:53:01.856009  228196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:53:01.865794  228196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1126 20:53:01.881312  228196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:53:01.899724  228196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
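The kubeadm.yaml written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by `---`). A quick sanity check that all four documents are present, using a trimmed illustrative copy of the stream:

```shell
# Sketch: count the kind: headers in a kubeadm-style multi-document YAML.
# The file contents here are a trimmed illustration of the config above.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$CFG"  # expect 4
```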
	I1126 20:53:01.916941  228196 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:53:01.923189  228196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:53:01.935261  228196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:02.112105  228196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:53:02.128602  228196 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801 for IP: 192.168.85.2
	I1126 20:53:02.128710  228196 certs.go:195] generating shared ca certs ...
	I1126 20:53:02.128749  228196 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:02.129053  228196 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:53:02.129155  228196 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:53:02.129194  228196 certs.go:257] generating profile certs ...
	I1126 20:53:02.129294  228196 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/client.key
	I1126 20:53:02.129377  228196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/client.crt with IP's: []
	I1126 20:53:02.818581  228196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/client.crt ...
	I1126 20:53:02.818654  228196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/client.crt: {Name:mk7fff7b2e4adbb3aa9ac47b7df65a4c1f648e99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:02.818877  228196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/client.key ...
	I1126 20:53:02.818891  228196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/client.key: {Name:mkae15a143f8482a2a05b6030bfc80fafd118bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:02.818975  228196 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key.ec6d08a2
	I1126 20:53:02.818988  228196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.crt.ec6d08a2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1126 20:53:02.874800  228196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.crt.ec6d08a2 ...
	I1126 20:53:02.874862  228196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.crt.ec6d08a2: {Name:mk6d97c6ae243d7cfa4d1961a05f6e79eaaca950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:02.875054  228196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key.ec6d08a2 ...
	I1126 20:53:02.875091  228196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key.ec6d08a2: {Name:mk0a4cbd75b95eaea93eb4e24fa7a0fe5e5a085c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:02.875221  228196 certs.go:382] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.crt.ec6d08a2 -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.crt
	I1126 20:53:02.875352  228196 certs.go:386] copying /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key.ec6d08a2 -> /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key
	I1126 20:53:02.875440  228196 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.key
	I1126 20:53:02.875490  228196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.crt with IP's: []
	I1126 20:53:03.253242  228196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.crt ...
	I1126 20:53:03.253314  228196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.crt: {Name:mk212de9280473e5be1280d050c3af4172df9aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:03.253502  228196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.key ...
	I1126 20:53:03.253537  228196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.key: {Name:mk073882bf1aa4166fb9cd8ef5feeea0002e7aef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
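The apiserver profile cert above is issued with SAN IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. A sketch producing a comparable self-signed cert with the openssl CLI and confirming the SANs (requires OpenSSL 1.1.1+ for `-addext`; paths and subject are illustrative, not minikube's actual certificate code path):

```shell
# Sketch: self-signed cert carrying the same SAN IPs as the apiserver cert.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout "$DIR/apiserver.key" -out "$DIR/apiserver.crt" \
    -subj "/O=system:masters/CN=minikube" \
    -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2" \
    2>/dev/null
# The SANs can be confirmed from the issued cert:
openssl x509 -in "$DIR/apiserver.crt" -noout -text | grep -A1 'Subject Alternative Name'
```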
	I1126 20:53:03.253768  228196 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:53:03.253840  228196 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:53:03.253914  228196 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:53:03.253987  228196 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:53:03.254041  228196 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:53:03.254098  228196 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:53:03.254175  228196 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:53:03.254764  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:53:03.280741  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:53:03.299777  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:53:03.319456  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:53:03.340911  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1126 20:53:03.363559  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:53:03.383759  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:53:03.403588  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:53:03.425164  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:53:03.444562  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:53:03.463032  228196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:53:03.481865  228196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:53:03.495819  228196 ssh_runner.go:195] Run: openssl version
	I1126 20:53:03.502864  228196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:53:03.511449  228196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:03.516194  228196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:03.516278  228196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:03.560313  228196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:53:03.569361  228196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:53:03.578340  228196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:53:03.582967  228196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:53:03.583132  228196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:53:03.634292  228196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:53:03.644594  228196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:53:03.655229  228196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:53:03.659433  228196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:53:03.659497  228196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:53:03.709630  228196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
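	The three `test -L … || ln -fs …` commands above are an idempotent guard for creating OpenSSL subject-hash symlinks (`<hash>.0`) in `/etc/ssl/certs`: the link is only (re)created if it is not already a symlink, so the step is safe to repeat on every start. A minimal sketch of the same pattern in a throwaway directory; the file name and the hard-coded hash are illustrative stand-ins, not real `openssl x509 -hash -noout` output:

```shell
# Idempotent symlink creation, as in the log lines above:
# create "<hash>.0" only if it is not already a symlink.
# All paths and the hash value here are illustrative stand-ins.
set -eu
dir=$(mktemp -d)
printf 'not-a-real-cert\n' > "$dir/minikubeCA.pem"
hash=b5213941   # stand-in for: openssl x509 -hash -noout -in <cert>
link="$dir/$hash.0"
# Running the guard twice shows repeating it is harmless.
test -L "$link" || ln -fs "$dir/minikubeCA.pem" "$link"
test -L "$link" || ln -fs "$dir/minikubeCA.pem" "$link"
readlink "$link"
```

	OpenSSL looks certificates up by this hash-named link at verification time, which is why the log computes the hash first and then links the PEM under it.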
	I1126 20:53:03.727871  228196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:53:03.735304  228196 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1126 20:53:03.735401  228196 kubeadm.go:401] StartCluster: {Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:53:03.735561  228196 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:53:03.735667  228196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:53:03.805275  228196 cri.go:89] found id: ""
	I1126 20:53:03.805393  228196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:53:03.815825  228196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1126 20:53:03.824357  228196 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1126 20:53:03.824499  228196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1126 20:53:03.835376  228196 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1126 20:53:03.835448  228196 kubeadm.go:158] found existing configuration files:
	
	I1126 20:53:03.835538  228196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1126 20:53:03.844364  228196 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1126 20:53:03.844481  228196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1126 20:53:03.852504  228196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1126 20:53:03.861315  228196 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1126 20:53:03.861426  228196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1126 20:53:03.869410  228196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1126 20:53:03.878422  228196 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1126 20:53:03.878565  228196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1126 20:53:03.886734  228196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1126 20:53:03.895553  228196 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1126 20:53:03.895667  228196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1126 20:53:03.903517  228196 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1126 20:53:03.953627  228196 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1126 20:53:03.954060  228196 kubeadm.go:319] [preflight] Running pre-flight checks
	I1126 20:53:03.988005  228196 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1126 20:53:03.988158  228196 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1126 20:53:03.988236  228196 kubeadm.go:319] OS: Linux
	I1126 20:53:03.988319  228196 kubeadm.go:319] CGROUPS_CPU: enabled
	I1126 20:53:03.988395  228196 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1126 20:53:03.988479  228196 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1126 20:53:03.988562  228196 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1126 20:53:03.988649  228196 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1126 20:53:03.988769  228196 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1126 20:53:03.988828  228196 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1126 20:53:03.988881  228196 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1126 20:53:03.988931  228196 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1126 20:53:04.067556  228196 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1126 20:53:04.067739  228196 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1126 20:53:04.067861  228196 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1126 20:53:04.082293  228196 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1126 20:53:02.270665  226403 pod_ready.go:104] pod "coredns-66bc5c9577-whx45" is not "Ready", error: <nil>
	W1126 20:53:04.768846  226403 pod_ready.go:104] pod "coredns-66bc5c9577-whx45" is not "Ready", error: <nil>
	I1126 20:53:04.088646  228196 out.go:252]   - Generating certificates and keys ...
	I1126 20:53:04.088817  228196 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1126 20:53:04.088938  228196 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1126 20:53:04.618266  228196 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1126 20:53:04.851733  228196 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1126 20:53:05.509771  228196 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1126 20:53:05.968811  228196 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1126 20:53:06.590017  228196 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1126 20:53:06.590615  228196 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-583801] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1126 20:53:07.024743  228196 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1126 20:53:07.025345  228196 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-583801] and IPs [192.168.85.2 127.0.0.1 ::1]
	W1126 20:53:07.269231  226403 pod_ready.go:104] pod "coredns-66bc5c9577-whx45" is not "Ready", error: <nil>
	W1126 20:53:09.768982  226403 pod_ready.go:104] pod "coredns-66bc5c9577-whx45" is not "Ready", error: <nil>
	I1126 20:53:07.836474  228196 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1126 20:53:08.377051  228196 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1126 20:53:09.658392  228196 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1126 20:53:09.659030  228196 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1126 20:53:10.133893  228196 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1126 20:53:10.605082  228196 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1126 20:53:10.790895  228196 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1126 20:53:11.546881  228196 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1126 20:53:12.325721  228196 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1126 20:53:12.325912  228196 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1126 20:53:12.326277  228196 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1126 20:53:11.770280  226403 pod_ready.go:104] pod "coredns-66bc5c9577-whx45" is not "Ready", error: <nil>
	W1126 20:53:14.267927  226403 pod_ready.go:104] pod "coredns-66bc5c9577-whx45" is not "Ready", error: <nil>
	I1126 20:53:12.331497  228196 out.go:252]   - Booting up control plane ...
	I1126 20:53:12.331670  228196 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1126 20:53:12.331784  228196 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1126 20:53:12.332636  228196 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1126 20:53:12.355201  228196 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1126 20:53:12.355385  228196 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1126 20:53:12.364866  228196 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1126 20:53:12.365057  228196 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1126 20:53:12.365137  228196 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1126 20:53:12.532644  228196 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1126 20:53:12.532871  228196 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1126 20:53:14.033310  228196 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501365608s
	I1126 20:53:14.037678  228196 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1126 20:53:14.037912  228196 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1126 20:53:14.038047  228196 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1126 20:53:14.038154  228196 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1126 20:53:16.286934  228196 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.247952253s
	I1126 20:53:17.908860  228196 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.870369886s
	I1126 20:53:19.540631  228196 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501906397s
	I1126 20:53:19.563163  228196 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1126 20:53:19.581791  228196 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1126 20:53:19.605575  228196 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1126 20:53:19.605775  228196 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-583801 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1126 20:53:19.618306  228196 kubeadm.go:319] [bootstrap-token] Using token: j4fl04.rshfbz3if5ztb4ic
	W1126 20:53:16.767996  226403 pod_ready.go:104] pod "coredns-66bc5c9577-whx45" is not "Ready", error: <nil>
	W1126 20:53:19.267726  226403 pod_ready.go:104] pod "coredns-66bc5c9577-whx45" is not "Ready", error: <nil>
	I1126 20:53:19.621219  228196 out.go:252]   - Configuring RBAC rules ...
	I1126 20:53:19.621345  228196 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1126 20:53:19.625603  228196 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1126 20:53:19.635812  228196 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1126 20:53:19.640190  228196 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1126 20:53:19.645101  228196 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1126 20:53:19.650996  228196 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1126 20:53:19.948809  228196 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1126 20:53:20.384783  228196 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1126 20:53:20.951231  228196 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1126 20:53:20.952517  228196 kubeadm.go:319] 
	I1126 20:53:20.952598  228196 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1126 20:53:20.952610  228196 kubeadm.go:319] 
	I1126 20:53:20.952688  228196 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1126 20:53:20.952696  228196 kubeadm.go:319] 
	I1126 20:53:20.952730  228196 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1126 20:53:20.952799  228196 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1126 20:53:20.952855  228196 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1126 20:53:20.952863  228196 kubeadm.go:319] 
	I1126 20:53:20.952919  228196 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1126 20:53:20.952926  228196 kubeadm.go:319] 
	I1126 20:53:20.952979  228196 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1126 20:53:20.952986  228196 kubeadm.go:319] 
	I1126 20:53:20.953040  228196 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1126 20:53:20.953119  228196 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1126 20:53:20.953191  228196 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1126 20:53:20.953198  228196 kubeadm.go:319] 
	I1126 20:53:20.953286  228196 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1126 20:53:20.953371  228196 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1126 20:53:20.953379  228196 kubeadm.go:319] 
	I1126 20:53:20.953464  228196 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j4fl04.rshfbz3if5ztb4ic \
	I1126 20:53:20.953573  228196 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a \
	I1126 20:53:20.953599  228196 kubeadm.go:319] 	--control-plane 
	I1126 20:53:20.953605  228196 kubeadm.go:319] 
	I1126 20:53:20.953691  228196 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1126 20:53:20.953697  228196 kubeadm.go:319] 
	I1126 20:53:20.953802  228196 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j4fl04.rshfbz3if5ztb4ic \
	I1126 20:53:20.953964  228196 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:70a69e680d3c56e0bc3067abb6e31dd3934bcef010390788fb62cdb860f2e95a 
	I1126 20:53:20.958370  228196 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1126 20:53:20.958702  228196 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1126 20:53:20.958828  228196 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1126 20:53:20.958853  228196 cni.go:84] Creating CNI manager for ""
	I1126 20:53:20.958861  228196 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:53:20.962097  228196 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1126 20:53:20.965162  228196 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1126 20:53:20.969532  228196 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1126 20:53:20.969551  228196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1126 20:53:20.983517  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1126 20:53:21.335285  228196 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1126 20:53:21.335417  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:53:21.335494  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-583801 minikube.k8s.io/updated_at=2025_11_26T20_53_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970 minikube.k8s.io/name=newest-cni-583801 minikube.k8s.io/primary=true
	I1126 20:53:21.499670  228196 ops.go:34] apiserver oom_adj: -16
	I1126 20:53:21.499775  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:53:22.000635  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1126 20:53:21.767558  226403 pod_ready.go:104] pod "coredns-66bc5c9577-whx45" is not "Ready", error: <nil>
	W1126 20:53:23.768318  226403 pod_ready.go:104] pod "coredns-66bc5c9577-whx45" is not "Ready", error: <nil>
	I1126 20:53:22.500599  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:53:23.000721  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:53:23.500446  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:53:23.999937  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:53:24.499973  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:53:24.999974  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:53:25.499847  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:53:25.999899  228196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1126 20:53:26.145954  228196 kubeadm.go:1114] duration metric: took 4.810583734s to wait for elevateKubeSystemPrivileges
	I1126 20:53:26.145982  228196 kubeadm.go:403] duration metric: took 22.410585547s to StartCluster
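	The run of `kubectl get sa default` calls above, spaced ~500ms apart from 20:53:21.5 to 20:53:26.1, is a poll-until-success loop: minikube retries until the `default` ServiceAccount exists, which signals that kube-system privileges are elevated. A hedged sketch of the same retry shape; `check` is a hypothetical stand-in for the real `kubectl get sa default` probe:

```shell
# Poll-until-success loop, shaped like the retries visible above.
# `check` is a stand-in that pretends the resource appears on try 3.
set -eu
attempts=0
check() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}
until check; do
  sleep 0.1   # the log shows ~500ms between real retries
done
echo "ready after $attempts attempts"
```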
	I1126 20:53:26.145998  228196 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:26.146057  228196 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:53:26.147035  228196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:26.147252  228196 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:53:26.147357  228196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1126 20:53:26.147631  228196 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:26.147670  228196 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:53:26.147733  228196 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-583801"
	I1126 20:53:26.147762  228196 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-583801"
	I1126 20:53:26.147786  228196 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:26.148349  228196 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:26.148677  228196 addons.go:70] Setting default-storageclass=true in profile "newest-cni-583801"
	I1126 20:53:26.148706  228196 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-583801"
	I1126 20:53:26.148950  228196 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:26.150665  228196 out.go:179] * Verifying Kubernetes components...
	I1126 20:53:26.153669  228196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:26.213896  228196 addons.go:239] Setting addon default-storageclass=true in "newest-cni-583801"
	I1126 20:53:26.218195  228196 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:26.218651  228196 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:26.220225  228196 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:53:26.224260  228196 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:53:26.224284  228196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:53:26.224344  228196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:26.255434  228196 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:53:26.255454  228196 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:53:26.255514  228196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:26.289201  228196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:26.307376  228196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:26.624644  228196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1126 20:53:26.624783  228196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:53:26.658121  228196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:53:26.694375  228196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:53:27.261587  228196 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:53:27.261650  228196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:53:27.261743  228196 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1126 20:53:27.528335  228196 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1126 20:53:27.528506  228196 api_server.go:72] duration metric: took 1.381222026s to wait for apiserver process to appear ...
	I1126 20:53:27.528685  228196 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:53:27.528736  228196 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:53:27.531776  228196 addons.go:530] duration metric: took 1.38409927s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1126 20:53:27.548931  228196 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1126 20:53:27.550742  228196 api_server.go:141] control plane version: v1.34.1
	I1126 20:53:27.550767  228196 api_server.go:131] duration metric: took 22.043376ms to wait for apiserver health ...
	I1126 20:53:27.550777  228196 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:53:27.569003  228196 system_pods.go:59] 8 kube-system pods found
	I1126 20:53:27.569041  228196 system_pods.go:61] "coredns-66bc5c9577-jgvmh" [120d7cde-44e6-4b70-a084-5dc9aedb43a1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:53:27.569049  228196 system_pods.go:61] "etcd-newest-cni-583801" [008f5999-344a-4440-9a40-e1cbef7e635a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:53:27.569056  228196 system_pods.go:61] "kindnet-sbsft" [86669a04-b137-4030-a081-e29138539712] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1126 20:53:27.569063  228196 system_pods.go:61] "kube-apiserver-newest-cni-583801" [4a7b65d1-3d49-4c9c-b7e2-c7710ef418b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:53:27.569068  228196 system_pods.go:61] "kube-controller-manager-newest-cni-583801" [9e395a3d-9368-41db-8671-6d9e20ec9c53] Running
	I1126 20:53:27.569072  228196 system_pods.go:61] "kube-proxy-gjz2x" [b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8] Running
	I1126 20:53:27.569076  228196 system_pods.go:61] "kube-scheduler-newest-cni-583801" [ddeb5080-621e-4014-815b-06844437b467] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:53:27.569081  228196 system_pods.go:61] "storage-provisioner" [99891d85-c274-44a1-b73d-7c21c77d320c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:53:27.569087  228196 system_pods.go:74] duration metric: took 18.304127ms to wait for pod list to return data ...
	I1126 20:53:27.569096  228196 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:53:27.577990  228196 default_sa.go:45] found service account: "default"
	I1126 20:53:27.578013  228196 default_sa.go:55] duration metric: took 8.911295ms for default service account to be created ...
	I1126 20:53:27.578030  228196 kubeadm.go:587] duration metric: took 1.430744103s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:53:27.578045  228196 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:53:27.585967  228196 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:53:27.586056  228196 node_conditions.go:123] node cpu capacity is 2
	I1126 20:53:27.586085  228196 node_conditions.go:105] duration metric: took 8.033782ms to run NodePressure ...
	I1126 20:53:27.586126  228196 start.go:242] waiting for startup goroutines ...
	I1126 20:53:27.766913  228196 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-583801" context rescaled to 1 replicas
	I1126 20:53:27.766983  228196 start.go:247] waiting for cluster config update ...
	I1126 20:53:27.767009  228196 start.go:256] writing updated cluster config ...
	I1126 20:53:27.767364  228196 ssh_runner.go:195] Run: rm -f paused
	I1126 20:53:27.834547  228196 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:53:27.837817  228196 out.go:179] * Done! kubectl is now configured to use "newest-cni-583801" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.23125364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.231840315Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-gjz2x/POD" id=b5679573-c108-447d-b8d3-f0d71dbc8d0b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.231906044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.268067566Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5fe5127c-1acb-47e1-b99f-a187511f7330 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.281501111Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b5679573-c108-447d-b8d3-f0d71dbc8d0b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.290933072Z" level=info msg="Ran pod sandbox 305dce6441ed2292e75672fcba4dfea2bd9e851a0c38af9fae6d5b544ed2eace with infra container: kube-system/kindnet-sbsft/POD" id=5fe5127c-1acb-47e1-b99f-a187511f7330 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.296936101Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a597e012-e1b4-4ad6-ace6-2bc6ac0dd449 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.307788633Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=50e65dae-08a7-4eb3-a68f-1a72ce1f6e73 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.322930495Z" level=info msg="Ran pod sandbox 44b4ed8927a58882beb3ae7a40bc5d43fc544842d19175dd451ab99c89ea63f0 with infra container: kube-system/kube-proxy-gjz2x/POD" id=b5679573-c108-447d-b8d3-f0d71dbc8d0b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.330547869Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ed1337a0-2c6d-460e-bfeb-242560b4d5c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.331448225Z" level=info msg="Creating container: kube-system/kindnet-sbsft/kindnet-cni" id=0e0b6c1c-aaf0-47da-8c88-07c54757e7c9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.332024127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.340792164Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7ecf2724-d6fc-4216-acec-e4844cb1cf88 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.366754255Z" level=info msg="Creating container: kube-system/kube-proxy-gjz2x/kube-proxy" id=0aa673fc-8de5-49b6-a51e-c64e496d6ae6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.366888823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.370481476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.395795372Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.410092261Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.41588088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.531257563Z" level=info msg="Created container a6002dd6cec9461074f004de0917a831ac05f4ae1786bfe390c723fdae3af59c: kube-system/kindnet-sbsft/kindnet-cni" id=0e0b6c1c-aaf0-47da-8c88-07c54757e7c9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.538258865Z" level=info msg="Starting container: a6002dd6cec9461074f004de0917a831ac05f4ae1786bfe390c723fdae3af59c" id=a1481cf2-6ac2-46de-88e8-3ec0235b511e name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.542312304Z" level=info msg="Started container" PID=1442 containerID=a6002dd6cec9461074f004de0917a831ac05f4ae1786bfe390c723fdae3af59c description=kube-system/kindnet-sbsft/kindnet-cni id=a1481cf2-6ac2-46de-88e8-3ec0235b511e name=/runtime.v1.RuntimeService/StartContainer sandboxID=305dce6441ed2292e75672fcba4dfea2bd9e851a0c38af9fae6d5b544ed2eace
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.556055995Z" level=info msg="Created container 9db98ee15c197266253080b537ff7a522fc060772d53af45dde961c3bfef617d: kube-system/kube-proxy-gjz2x/kube-proxy" id=0aa673fc-8de5-49b6-a51e-c64e496d6ae6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.557726224Z" level=info msg="Starting container: 9db98ee15c197266253080b537ff7a522fc060772d53af45dde961c3bfef617d" id=9e81a7a2-2b6d-4057-a285-4d88782b839d name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:53:26 newest-cni-583801 crio[839]: time="2025-11-26T20:53:26.568311744Z" level=info msg="Started container" PID=1451 containerID=9db98ee15c197266253080b537ff7a522fc060772d53af45dde961c3bfef617d description=kube-system/kube-proxy-gjz2x/kube-proxy id=9e81a7a2-2b6d-4057-a285-4d88782b839d name=/runtime.v1.RuntimeService/StartContainer sandboxID=44b4ed8927a58882beb3ae7a40bc5d43fc544842d19175dd451ab99c89ea63f0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9db98ee15c197       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   44b4ed8927a58       kube-proxy-gjz2x                            kube-system
	a6002dd6cec94       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   305dce6441ed2       kindnet-sbsft                               kube-system
	b7701bad4c75d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   08362c04542e3       etcd-newest-cni-583801                      kube-system
	f2ef0c7d47e02       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   a3b9320b97a90       kube-scheduler-newest-cni-583801            kube-system
	4803045db5d00       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   f98e61cd82d0d       kube-controller-manager-newest-cni-583801   kube-system
	1371cb61dd5b8       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   b91424e3d24fa       kube-apiserver-newest-cni-583801            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-583801
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-583801
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=newest-cni-583801
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_53_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:53:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-583801
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:53:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:53:20 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:53:20 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:53:20 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 26 Nov 2025 20:53:20 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-583801
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                e506ba8d-2f72-4740-8ae9-08bb604d173a
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-583801                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-sbsft                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-583801             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-583801    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-proxy-gjz2x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-583801             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-583801 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-583801 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-583801 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-583801 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-583801 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-583801 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-583801 event: Registered Node newest-cni-583801 in Controller
	
	
	==> dmesg <==
	[ +15.481333] overlayfs: idmapped layers are currently not supported
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	[Nov26 20:49] overlayfs: idmapped layers are currently not supported
	[Nov26 20:50] overlayfs: idmapped layers are currently not supported
	[Nov26 20:51] overlayfs: idmapped layers are currently not supported
	[ +24.066506] overlayfs: idmapped layers are currently not supported
	[Nov26 20:52] overlayfs: idmapped layers are currently not supported
	[Nov26 20:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b7701bad4c75dc7785411fe71d05b534f15e1232036f41cf4005059fb0158ebe] <==
	{"level":"warn","ts":"2025-11-26T20:53:16.513618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.533670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.550888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.566437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.584263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.603883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.621664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.654413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.665793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.693992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.721990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.740725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.759871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.782310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.795094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.818682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.833539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.848771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.875818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.891770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.916740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.938719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.954853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:16.982312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:17.085102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:53:29 up  1:35,  0 user,  load average: 3.93, 3.44, 2.71
	Linux newest-cni-583801 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a6002dd6cec9461074f004de0917a831ac05f4ae1786bfe390c723fdae3af59c] <==
	I1126 20:53:26.648459       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:53:26.726842       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:53:26.731431       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:53:26.731458       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:53:26.731471       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:53:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:53:26.931013       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:53:26.931055       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:53:26.931065       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:53:26.932082       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [1371cb61dd5b82ef7e200c59d837e386e9c280b8f0cef0f810cac3755018a131] <==
	I1126 20:53:17.948031       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1126 20:53:17.948095       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:53:17.948108       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:53:17.948115       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:53:17.948121       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:53:17.949161       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:53:17.967134       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:53:17.984221       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:53:18.634993       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1126 20:53:18.640858       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1126 20:53:18.640942       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:53:19.426650       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:53:19.484176       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:53:19.544460       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1126 20:53:19.570542       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1126 20:53:19.571892       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:53:19.578466       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:53:19.878235       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:53:20.357908       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:53:20.383143       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1126 20:53:20.397014       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1126 20:53:25.236496       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:53:25.244738       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:53:25.781581       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:53:25.881121       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4803045db5d00cea1ff1bfb43968565da95e7922d18582ab3fa768273e1e983d] <==
	I1126 20:53:24.926638       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:53:24.926719       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:53:24.926793       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:53:24.927993       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:53:24.928030       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 20:53:24.929041       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:53:24.929085       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1126 20:53:24.929112       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1126 20:53:24.929134       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:53:24.929269       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:53:24.929885       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:53:24.930628       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:53:24.936012       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1126 20:53:24.940295       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:53:24.943817       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:53:24.943838       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:53:24.943846       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1126 20:53:24.974827       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1126 20:53:24.974931       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1126 20:53:24.975006       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-583801"
	I1126 20:53:24.975047       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1126 20:53:24.977420       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:53:24.977429       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 20:53:24.977445       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:53:24.977458       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	
	
	==> kube-proxy [9db98ee15c197266253080b537ff7a522fc060772d53af45dde961c3bfef617d] <==
	I1126 20:53:26.668899       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:53:26.783561       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:53:26.884185       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:53:26.884227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1126 20:53:26.884312       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:53:26.987949       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:53:26.988006       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:53:26.992723       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:53:26.993165       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:53:26.993183       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:53:26.994280       1 config.go:200] "Starting service config controller"
	I1126 20:53:26.994295       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:53:26.998044       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:53:26.998061       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:53:26.998083       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:53:26.998087       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:53:27.006098       1 config.go:309] "Starting node config controller"
	I1126 20:53:27.014923       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:53:27.014977       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:53:27.096155       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:53:27.099553       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:53:27.099599       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f2ef0c7d47e02d64b42ec3ef7bad3e8b779402017ff406459b5241b0aa0d66db] <==
	E1126 20:53:17.907982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1126 20:53:17.909237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 20:53:17.909402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1126 20:53:17.913168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1126 20:53:17.913305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1126 20:53:17.913504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1126 20:53:17.914165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1126 20:53:17.914292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1126 20:53:17.914654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:53:17.916004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 20:53:17.916078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1126 20:53:17.916719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 20:53:18.725195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1126 20:53:18.725275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1126 20:53:18.760627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1126 20:53:18.781092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1126 20:53:18.812039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1126 20:53:18.827822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1126 20:53:18.921732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1126 20:53:19.043671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1126 20:53:19.087761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1126 20:53:19.113561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1126 20:53:19.128694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1126 20:53:19.140195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1126 20:53:21.390438       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: I1126 20:53:21.282687    1298 apiserver.go:52] "Watching apiserver"
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: I1126 20:53:21.328541    1298 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: I1126 20:53:21.437979    1298 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-583801"
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: I1126 20:53:21.438323    1298 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-583801"
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: I1126 20:53:21.438614    1298 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-583801"
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: E1126 20:53:21.454192    1298 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-583801\" already exists" pod="kube-system/etcd-newest-cni-583801"
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: E1126 20:53:21.455746    1298 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-583801\" already exists" pod="kube-system/kube-apiserver-newest-cni-583801"
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: E1126 20:53:21.458576    1298 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-583801\" already exists" pod="kube-system/kube-scheduler-newest-cni-583801"
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: I1126 20:53:21.480969    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-583801" podStartSLOduration=1.48095318 podStartE2EDuration="1.48095318s" podCreationTimestamp="2025-11-26 20:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:53:21.480671317 +0000 UTC m=+1.280439725" watchObservedRunningTime="2025-11-26 20:53:21.48095318 +0000 UTC m=+1.280721580"
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: I1126 20:53:21.514221    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-583801" podStartSLOduration=2.514197294 podStartE2EDuration="2.514197294s" podCreationTimestamp="2025-11-26 20:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:53:21.499999931 +0000 UTC m=+1.299768339" watchObservedRunningTime="2025-11-26 20:53:21.514197294 +0000 UTC m=+1.313965686"
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: I1126 20:53:21.514469    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-583801" podStartSLOduration=1.514458652 podStartE2EDuration="1.514458652s" podCreationTimestamp="2025-11-26 20:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:53:21.514055464 +0000 UTC m=+1.313823864" watchObservedRunningTime="2025-11-26 20:53:21.514458652 +0000 UTC m=+1.314227044"
	Nov 26 20:53:21 newest-cni-583801 kubelet[1298]: I1126 20:53:21.554930    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-583801" podStartSLOduration=1.5549119390000001 podStartE2EDuration="1.554911939s" podCreationTimestamp="2025-11-26 20:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:53:21.535412623 +0000 UTC m=+1.335181023" watchObservedRunningTime="2025-11-26 20:53:21.554911939 +0000 UTC m=+1.354680331"
	Nov 26 20:53:24 newest-cni-583801 kubelet[1298]: I1126 20:53:24.903694    1298 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 26 20:53:24 newest-cni-583801 kubelet[1298]: I1126 20:53:24.904335    1298 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 26 20:53:25 newest-cni-583801 kubelet[1298]: I1126 20:53:25.963360    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/86669a04-b137-4030-a081-e29138539712-cni-cfg\") pod \"kindnet-sbsft\" (UID: \"86669a04-b137-4030-a081-e29138539712\") " pod="kube-system/kindnet-sbsft"
	Nov 26 20:53:25 newest-cni-583801 kubelet[1298]: I1126 20:53:25.963420    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8-lib-modules\") pod \"kube-proxy-gjz2x\" (UID: \"b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8\") " pod="kube-system/kube-proxy-gjz2x"
	Nov 26 20:53:25 newest-cni-583801 kubelet[1298]: I1126 20:53:25.963455    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbk2\" (UniqueName: \"kubernetes.io/projected/b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8-kube-api-access-5cbk2\") pod \"kube-proxy-gjz2x\" (UID: \"b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8\") " pod="kube-system/kube-proxy-gjz2x"
	Nov 26 20:53:25 newest-cni-583801 kubelet[1298]: I1126 20:53:25.963510    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86669a04-b137-4030-a081-e29138539712-xtables-lock\") pod \"kindnet-sbsft\" (UID: \"86669a04-b137-4030-a081-e29138539712\") " pod="kube-system/kindnet-sbsft"
	Nov 26 20:53:25 newest-cni-583801 kubelet[1298]: I1126 20:53:25.963532    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86669a04-b137-4030-a081-e29138539712-lib-modules\") pod \"kindnet-sbsft\" (UID: \"86669a04-b137-4030-a081-e29138539712\") " pod="kube-system/kindnet-sbsft"
	Nov 26 20:53:25 newest-cni-583801 kubelet[1298]: I1126 20:53:25.963551    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qk66\" (UniqueName: \"kubernetes.io/projected/86669a04-b137-4030-a081-e29138539712-kube-api-access-8qk66\") pod \"kindnet-sbsft\" (UID: \"86669a04-b137-4030-a081-e29138539712\") " pod="kube-system/kindnet-sbsft"
	Nov 26 20:53:25 newest-cni-583801 kubelet[1298]: I1126 20:53:25.963568    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8-xtables-lock\") pod \"kube-proxy-gjz2x\" (UID: \"b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8\") " pod="kube-system/kube-proxy-gjz2x"
	Nov 26 20:53:25 newest-cni-583801 kubelet[1298]: I1126 20:53:25.963596    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8-kube-proxy\") pod \"kube-proxy-gjz2x\" (UID: \"b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8\") " pod="kube-system/kube-proxy-gjz2x"
	Nov 26 20:53:26 newest-cni-583801 kubelet[1298]: I1126 20:53:26.083579    1298 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 26 20:53:27 newest-cni-583801 kubelet[1298]: I1126 20:53:27.559916    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gjz2x" podStartSLOduration=2.559895299 podStartE2EDuration="2.559895299s" podCreationTimestamp="2025-11-26 20:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:53:27.529851076 +0000 UTC m=+7.329619484" watchObservedRunningTime="2025-11-26 20:53:27.559895299 +0000 UTC m=+7.359663699"
	Nov 26 20:53:29 newest-cni-583801 kubelet[1298]: I1126 20:53:29.489738    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-sbsft" podStartSLOduration=4.489719337 podStartE2EDuration="4.489719337s" podCreationTimestamp="2025-11-26 20:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 20:53:27.583717242 +0000 UTC m=+7.383485633" watchObservedRunningTime="2025-11-26 20:53:29.489719337 +0000 UTC m=+9.289487737"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-583801 -n newest-cni-583801
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-583801 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-jgvmh storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-583801 describe pod coredns-66bc5c9577-jgvmh storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-583801 describe pod coredns-66bc5c9577-jgvmh storage-provisioner: exit status 1 (84.74693ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-jgvmh" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-583801 describe pod coredns-66bc5c9577-jgvmh storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.54s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (9.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-538119 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-538119 --alsologtostderr -v=1: exit status 80 (2.746313301s)

-- stdout --
	* Pausing node default-k8s-diff-port-538119 ... 
	
	

-- /stdout --
** stderr ** 
	I1126 20:53:42.794690  233983 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:53:42.794890  233983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:42.794903  233983 out.go:374] Setting ErrFile to fd 2...
	I1126 20:53:42.794909  233983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:42.795199  233983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:53:42.795467  233983 out.go:368] Setting JSON to false
	I1126 20:53:42.795516  233983 mustload.go:66] Loading cluster: default-k8s-diff-port-538119
	I1126 20:53:42.795956  233983 config.go:182] Loaded profile config "default-k8s-diff-port-538119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:42.796470  233983 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-538119 --format={{.State.Status}}
	I1126 20:53:42.833714  233983 host.go:66] Checking if "default-k8s-diff-port-538119" exists ...
	I1126 20:53:42.834087  233983 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:53:42.921219  233983 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-26 20:53:42.911769696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:53:42.921878  233983 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-538119 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:53:42.925445  233983 out.go:179] * Pausing node default-k8s-diff-port-538119 ... 
	I1126 20:53:42.931942  233983 host.go:66] Checking if "default-k8s-diff-port-538119" exists ...
	I1126 20:53:42.932299  233983 ssh_runner.go:195] Run: systemctl --version
	I1126 20:53:42.932356  233983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-538119
	I1126 20:53:42.959181  233983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/default-k8s-diff-port-538119/id_rsa Username:docker}
	I1126 20:53:43.074268  233983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:53:43.118479  233983 pause.go:52] kubelet running: true
	I1126 20:53:43.118548  233983 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:53:43.513979  233983 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:53:43.514056  233983 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:53:43.635227  233983 cri.go:89] found id: "37d358b29691acefbe7a5309e329f27200aa8514dd0f7f283352c3b4cd48c2a1"
	I1126 20:53:43.635252  233983 cri.go:89] found id: "bbc4ffa86f03ffd0b7f32a69952e54fd4a11931def215ade5a35c91e6997fa4d"
	I1126 20:53:43.635257  233983 cri.go:89] found id: "1451c264cad0b3e134f425cba27c32de088a5ac1e0f20d19dcac5bb5fac0b13d"
	I1126 20:53:43.635261  233983 cri.go:89] found id: "9edd7747a1eb77ffab56dbbfa69d70a61e1dc6edec2dbb9c8873ad6e848517d0"
	I1126 20:53:43.635264  233983 cri.go:89] found id: "d6cd6ce6790b4b0fda712fb3190ae2bd302a3535807ba5a84ec859b03d974194"
	I1126 20:53:43.635268  233983 cri.go:89] found id: "ebea4280eb674478aadbae605d2061b7c068854e5d7ec7d5b4fb24f16fe0cfb9"
	I1126 20:53:43.635270  233983 cri.go:89] found id: "fc58d11ea93321e33cff7333a94130c39e21c09f52f801603b1a6a3a6ad98d31"
	I1126 20:53:43.635299  233983 cri.go:89] found id: "220d1f4d36b36e980115005c48030f8c1bcbf01b34d094b15f89d89ca0ae205f"
	I1126 20:53:43.635310  233983 cri.go:89] found id: "192c4461955e12aeca35caebeb96aaa6b7c140e0c20bce5b442625309d73063a"
	I1126 20:53:43.635317  233983 cri.go:89] found id: "985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e"
	I1126 20:53:43.635320  233983 cri.go:89] found id: "ebf08cb7657a6ca910fdbb8f925d3bb2d31f344e7692e636ce0c0a3e75654569"
	I1126 20:53:43.635323  233983 cri.go:89] found id: ""
	I1126 20:53:43.635390  233983 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:53:43.652856  233983 retry.go:31] will retry after 275.648104ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:53:43Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:53:43.929358  233983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:53:43.950589  233983 pause.go:52] kubelet running: false
	I1126 20:53:43.950678  233983 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:53:44.246335  233983 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:53:44.246452  233983 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:53:44.407496  233983 cri.go:89] found id: "37d358b29691acefbe7a5309e329f27200aa8514dd0f7f283352c3b4cd48c2a1"
	I1126 20:53:44.407529  233983 cri.go:89] found id: "bbc4ffa86f03ffd0b7f32a69952e54fd4a11931def215ade5a35c91e6997fa4d"
	I1126 20:53:44.407535  233983 cri.go:89] found id: "1451c264cad0b3e134f425cba27c32de088a5ac1e0f20d19dcac5bb5fac0b13d"
	I1126 20:53:44.407538  233983 cri.go:89] found id: "9edd7747a1eb77ffab56dbbfa69d70a61e1dc6edec2dbb9c8873ad6e848517d0"
	I1126 20:53:44.407542  233983 cri.go:89] found id: "d6cd6ce6790b4b0fda712fb3190ae2bd302a3535807ba5a84ec859b03d974194"
	I1126 20:53:44.407546  233983 cri.go:89] found id: "ebea4280eb674478aadbae605d2061b7c068854e5d7ec7d5b4fb24f16fe0cfb9"
	I1126 20:53:44.407549  233983 cri.go:89] found id: "fc58d11ea93321e33cff7333a94130c39e21c09f52f801603b1a6a3a6ad98d31"
	I1126 20:53:44.407568  233983 cri.go:89] found id: "220d1f4d36b36e980115005c48030f8c1bcbf01b34d094b15f89d89ca0ae205f"
	I1126 20:53:44.407588  233983 cri.go:89] found id: "192c4461955e12aeca35caebeb96aaa6b7c140e0c20bce5b442625309d73063a"
	I1126 20:53:44.407602  233983 cri.go:89] found id: "985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e"
	I1126 20:53:44.407610  233983 cri.go:89] found id: "ebf08cb7657a6ca910fdbb8f925d3bb2d31f344e7692e636ce0c0a3e75654569"
	I1126 20:53:44.407613  233983 cri.go:89] found id: ""
	I1126 20:53:44.407675  233983 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:53:44.425079  233983 retry.go:31] will retry after 475.938066ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:53:44Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:53:44.901788  233983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:53:44.921177  233983 pause.go:52] kubelet running: false
	I1126 20:53:44.921291  233983 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:53:45.265389  233983 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:53:45.265507  233983 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:53:45.418844  233983 cri.go:89] found id: "37d358b29691acefbe7a5309e329f27200aa8514dd0f7f283352c3b4cd48c2a1"
	I1126 20:53:45.418869  233983 cri.go:89] found id: "bbc4ffa86f03ffd0b7f32a69952e54fd4a11931def215ade5a35c91e6997fa4d"
	I1126 20:53:45.418874  233983 cri.go:89] found id: "1451c264cad0b3e134f425cba27c32de088a5ac1e0f20d19dcac5bb5fac0b13d"
	I1126 20:53:45.418878  233983 cri.go:89] found id: "9edd7747a1eb77ffab56dbbfa69d70a61e1dc6edec2dbb9c8873ad6e848517d0"
	I1126 20:53:45.418890  233983 cri.go:89] found id: "d6cd6ce6790b4b0fda712fb3190ae2bd302a3535807ba5a84ec859b03d974194"
	I1126 20:53:45.418941  233983 cri.go:89] found id: "ebea4280eb674478aadbae605d2061b7c068854e5d7ec7d5b4fb24f16fe0cfb9"
	I1126 20:53:45.418954  233983 cri.go:89] found id: "fc58d11ea93321e33cff7333a94130c39e21c09f52f801603b1a6a3a6ad98d31"
	I1126 20:53:45.418959  233983 cri.go:89] found id: "220d1f4d36b36e980115005c48030f8c1bcbf01b34d094b15f89d89ca0ae205f"
	I1126 20:53:45.418963  233983 cri.go:89] found id: "192c4461955e12aeca35caebeb96aaa6b7c140e0c20bce5b442625309d73063a"
	I1126 20:53:45.418970  233983 cri.go:89] found id: "985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e"
	I1126 20:53:45.418979  233983 cri.go:89] found id: "ebf08cb7657a6ca910fdbb8f925d3bb2d31f344e7692e636ce0c0a3e75654569"
	I1126 20:53:45.418997  233983 cri.go:89] found id: ""
	I1126 20:53:45.419081  233983 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:53:45.446732  233983 out.go:203] 
	W1126 20:53:45.449809  233983 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:53:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:53:45.449955  233983 out.go:285] * 
	W1126 20:53:45.456139  233983 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:53:45.459398  233983 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-538119 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-538119
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-538119:

-- stdout --
	[
	    {
	        "Id": "0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45",
	        "Created": "2025-11-26T20:51:00.643686103Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 226590,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:52:40.96898825Z",
	            "FinishedAt": "2025-11-26T20:52:39.998828588Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/hostname",
	        "HostsPath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/hosts",
	        "LogPath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45-json.log",
	        "Name": "/default-k8s-diff-port-538119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-538119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-538119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45",
	                "LowerDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-538119",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-538119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-538119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-538119",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-538119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b6ac046f9dfa6460679f8f70f9bb70ea6ab78f2110f1c360751b6ccb655e792e",
	            "SandboxKey": "/var/run/docker/netns/b6ac046f9dfa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-538119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:2c:17:37:f3:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58099cffa65b0cb809ecb55668d778b1399828737559d8aaf8663745e845c3ba",
	                    "EndpointID": "89a4060e2fde7c1d15a94683e6d901522b7b0ff5fbe5ec71d630e68387a78e9f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-538119",
	                        "0376b85fe7a8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119: exit status 2 (524.003269ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-538119 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-538119 logs -n 25: (1.951019314s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-956694 image list --format=json                                                                                                                                                                                                    │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ pause   │ -p no-preload-956694 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │                     │
	│ delete  │ -p no-preload-956694                                                                                                                                                                                                                          │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p no-preload-956694                                                                                                                                                                                                                          │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p disable-driver-mounts-180932                                                                                                                                                                                                               │ disable-driver-mounts-180932 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │                     │
	│ stop    │ -p embed-certs-616586 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-616586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538119 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ image   │ embed-certs-616586 image list --format=json                                                                                                                                                                                                   │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ pause   │ -p embed-certs-616586 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:53 UTC │
	│ delete  │ -p embed-certs-616586                                                                                                                                                                                                                         │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ delete  │ -p embed-certs-616586                                                                                                                                                                                                                         │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ start   │ -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-583801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	│ stop    │ -p newest-cni-583801 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-583801 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ start   │ -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	│ image   │ default-k8s-diff-port-538119 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ pause   │ -p default-k8s-diff-port-538119 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:53:32
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:53:32.058363  232430 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:53:32.058691  232430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:32.058726  232430 out.go:374] Setting ErrFile to fd 2...
	I1126 20:53:32.058747  232430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:32.059051  232430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:53:32.059474  232430 out.go:368] Setting JSON to false
	I1126 20:53:32.060471  232430 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5742,"bootTime":1764184670,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:53:32.060576  232430 start.go:143] virtualization:  
	I1126 20:53:32.063909  232430 out.go:179] * [newest-cni-583801] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:53:32.067898  232430 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:53:32.067980  232430 notify.go:221] Checking for updates...
	I1126 20:53:32.074206  232430 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:53:32.077087  232430 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:53:32.080000  232430 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:53:32.083011  232430 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:53:32.086000  232430 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:53:32.089464  232430 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:32.090230  232430 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:53:32.123576  232430 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:53:32.123684  232430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:53:32.181819  232430 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:53:32.171614062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:53:32.181945  232430 docker.go:319] overlay module found
	I1126 20:53:32.185115  232430 out.go:179] * Using the docker driver based on existing profile
	I1126 20:53:32.187998  232430 start.go:309] selected driver: docker
	I1126 20:53:32.188016  232430 start.go:927] validating driver "docker" against &{Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:53:32.188123  232430 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:53:32.188873  232430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:53:32.247743  232430 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:53:32.237861309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:53:32.248097  232430 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:53:32.248130  232430 cni.go:84] Creating CNI manager for ""
	I1126 20:53:32.248192  232430 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:53:32.248235  232430 start.go:353] cluster config:
	{Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:53:32.253258  232430 out.go:179] * Starting "newest-cni-583801" primary control-plane node in "newest-cni-583801" cluster
	I1126 20:53:32.256177  232430 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:53:32.259057  232430 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:53:32.262071  232430 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:53:32.262125  232430 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:53:32.262125  232430 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:53:32.262135  232430 cache.go:65] Caching tarball of preloaded images
	I1126 20:53:32.262351  232430 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:53:32.262363  232430 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:53:32.262584  232430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/config.json ...
	I1126 20:53:32.282185  232430 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:53:32.282208  232430 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:53:32.282228  232430 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:53:32.282258  232430 start.go:360] acquireMachinesLock for newest-cni-583801: {Name:mk5a5c4e74106a93e4d595458226ad93568e2c2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:53:32.282328  232430 start.go:364] duration metric: took 46.324µs to acquireMachinesLock for "newest-cni-583801"
	I1126 20:53:32.282350  232430 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:53:32.282356  232430 fix.go:54] fixHost starting: 
	I1126 20:53:32.282629  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:32.299954  232430 fix.go:112] recreateIfNeeded on newest-cni-583801: state=Stopped err=<nil>
	W1126 20:53:32.299985  232430 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:53:32.303287  232430 out.go:252] * Restarting existing docker container for "newest-cni-583801" ...
	I1126 20:53:32.303379  232430 cli_runner.go:164] Run: docker start newest-cni-583801
	I1126 20:53:32.554974  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:32.585873  232430 kic.go:430] container "newest-cni-583801" state is running.
	I1126 20:53:32.586285  232430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:53:32.606597  232430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/config.json ...
	I1126 20:53:32.606821  232430 machine.go:94] provisionDockerMachine start ...
	I1126 20:53:32.606878  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:32.629439  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:32.630040  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:32.630056  232430 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:53:32.630649  232430 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40338->127.0.0.1:33088: read: connection reset by peer
	I1126 20:53:35.790036  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-583801
	
	I1126 20:53:35.790065  232430 ubuntu.go:182] provisioning hostname "newest-cni-583801"
	I1126 20:53:35.790129  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:35.808513  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:35.808921  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:35.808938  232430 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-583801 && echo "newest-cni-583801" | sudo tee /etc/hostname
	I1126 20:53:35.968272  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-583801
	
	I1126 20:53:35.968372  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:35.985281  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:35.985588  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:35.985608  232430 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-583801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-583801/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-583801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:53:36.134065  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:53:36.134089  232430 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:53:36.134110  232430 ubuntu.go:190] setting up certificates
	I1126 20:53:36.134120  232430 provision.go:84] configureAuth start
	I1126 20:53:36.134186  232430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:53:36.150568  232430 provision.go:143] copyHostCerts
	I1126 20:53:36.150637  232430 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:53:36.150656  232430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:53:36.150733  232430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:53:36.150850  232430 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:53:36.150861  232430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:53:36.150889  232430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:53:36.150959  232430 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:53:36.150968  232430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:53:36.150995  232430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:53:36.151056  232430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.newest-cni-583801 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-583801]
	I1126 20:53:36.403502  232430 provision.go:177] copyRemoteCerts
	I1126 20:53:36.403577  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:53:36.403620  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:36.421644  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:36.529798  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:53:36.550564  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:53:36.568751  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:53:36.587015  232430 provision.go:87] duration metric: took 452.872031ms to configureAuth
	I1126 20:53:36.587084  232430 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:53:36.587333  232430 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:36.587487  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:36.607919  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:36.608234  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:36.608248  232430 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:53:36.958850  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:53:36.958868  232430 machine.go:97] duration metric: took 4.352038468s to provisionDockerMachine
	I1126 20:53:36.958880  232430 start.go:293] postStartSetup for "newest-cni-583801" (driver="docker")
	I1126 20:53:36.958891  232430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:53:36.958970  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:53:36.959007  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:36.981192  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.093758  232430 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:53:37.097114  232430 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:53:37.097141  232430 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:53:37.097152  232430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:53:37.097210  232430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:53:37.097285  232430 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:53:37.097387  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:53:37.104454  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:53:37.123849  232430 start.go:296] duration metric: took 164.954962ms for postStartSetup
	I1126 20:53:37.123942  232430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:53:37.123986  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:37.154074  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.255041  232430 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:53:37.259591  232430 fix.go:56] duration metric: took 4.977229179s for fixHost
	I1126 20:53:37.259622  232430 start.go:83] releasing machines lock for "newest-cni-583801", held for 4.977273748s
	I1126 20:53:37.259685  232430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:53:37.277023  232430 ssh_runner.go:195] Run: cat /version.json
	I1126 20:53:37.277072  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:37.277368  232430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:53:37.277419  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:37.295698  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.296359  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.397677  232430 ssh_runner.go:195] Run: systemctl --version
	I1126 20:53:37.512657  232430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:53:37.548900  232430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:53:37.554052  232430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:53:37.554156  232430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:53:37.562330  232430 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:53:37.562356  232430 start.go:496] detecting cgroup driver to use...
	I1126 20:53:37.562394  232430 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:53:37.562446  232430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:53:37.577492  232430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:53:37.593094  232430 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:53:37.593215  232430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:53:37.611076  232430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:53:37.624211  232430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:53:37.766892  232430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:53:37.889373  232430 docker.go:234] disabling docker service ...
	I1126 20:53:37.889436  232430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:53:37.904993  232430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:53:37.918557  232430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:53:38.036692  232430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:53:38.159223  232430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:53:38.173000  232430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:53:38.190826  232430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:53:38.190920  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.200346  232430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:53:38.200425  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.209865  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.220265  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.229629  232430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:53:38.238736  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.249532  232430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.258061  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.267114  232430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:53:38.274404  232430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:53:38.281917  232430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:38.418686  232430 ssh_runner.go:195] Run: sudo systemctl restart crio
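	The `sed` one-liners above rewrite `/etc/crio/crio.conf.d/02-crio.conf` in place (pause image, cgroup manager, conmon cgroup) before `daemon-reload` and the CRI-O restart pick the changes up. A minimal sketch of those substitutions, run against a scratch copy rather than the real drop-in (the sample file contents are assumptions for illustration; requires GNU sed, as on the Linux hosts this log comes from):

```shell
# Scratch copy standing in for /etc/crio/crio.conf.d/02-crio.conf
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same substitutions the log performs (without sudo, on the copy):
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
```

	Deleting `conmon_cgroup` before re-appending it after `cgroup_manager` keeps the edit idempotent across reruns.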
	I1126 20:53:38.600747  232430 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:53:38.600819  232430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:53:38.604608  232430 start.go:564] Will wait 60s for crictl version
	I1126 20:53:38.604739  232430 ssh_runner.go:195] Run: which crictl
	I1126 20:53:38.608219  232430 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:53:38.636623  232430 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:53:38.636773  232430 ssh_runner.go:195] Run: crio --version
	I1126 20:53:38.665852  232430 ssh_runner.go:195] Run: crio --version
	I1126 20:53:38.696090  232430 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:53:38.698804  232430 cli_runner.go:164] Run: docker network inspect newest-cni-583801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:53:38.715485  232430 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:53:38.719345  232430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
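	The pair of commands above is an idempotent hosts-file update: `grep` first checks whether the entry already exists, then the bash one-liner strips any stale `host.minikube.internal` line and appends the current mapping. A sketch of the same pattern against a temporary file (path and IP are illustrative; `$'\t'` quoting assumes bash):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n' > "$hosts"

# Drop any existing entry, then append the desired one. Running this
# twice leaves the file unchanged, which is the point of the pattern.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.85.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```

	The real command uses `sudo cp` from `/tmp` rather than `mv` because `/etc/hosts` is root-owned on the node.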
	I1126 20:53:38.731937  232430 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1126 20:53:38.734654  232430 kubeadm.go:884] updating cluster {Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:53:38.734808  232430 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:53:38.734877  232430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:53:38.768850  232430 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:53:38.768875  232430 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:53:38.768939  232430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:53:38.793625  232430 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:53:38.793649  232430 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:53:38.793658  232430 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1126 20:53:38.793759  232430 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-583801 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:53:38.793882  232430 ssh_runner.go:195] Run: crio config
	I1126 20:53:38.865073  232430 cni.go:84] Creating CNI manager for ""
	I1126 20:53:38.865138  232430 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:53:38.865169  232430 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1126 20:53:38.865220  232430 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-583801 NodeName:newest-cni-583801 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:53:38.865412  232430 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-583801"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
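	The generated config above stitches four YAML documents together, and one invariant worth noting is that kube-proxy's `clusterCIDR` must match the `podSubnet` handed to kubeadm (here both derive from the `kubeadm.pod-network-cidr=10.42.0.0/16` extra option). A sketch checking that invariant on a trimmed copy of the fragment, using `sed` rather than a YAML parser since the values are simple scalars:

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
networking:
  podSubnet: "10.42.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
kind: KubeProxyConfiguration
clusterCIDR: "10.42.0.0/16"
EOF

# Pull out both CIDRs and confirm they agree; a mismatch here would
# leave kube-proxy programming rules for the wrong pod network.
pod=$(sed -n 's/.*podSubnet: "\(.*\)"/\1/p' "$cfg")
proxy=$(sed -n 's/.*clusterCIDR: "\(.*\)"/\1/p' "$cfg")
echo "podSubnet=$pod clusterCIDR=$proxy"
```

	The zeroed `evictionHard` thresholds and `imageGCHighThresholdPercent: 100` in the same config intentionally disable disk-pressure eviction, as the inline comment in the log notes.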
	
	I1126 20:53:38.865499  232430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:53:38.874386  232430 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:53:38.874499  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:53:38.882476  232430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1126 20:53:38.895474  232430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:53:38.908580  232430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1126 20:53:38.936361  232430 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:53:38.940405  232430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:53:38.950570  232430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:39.073766  232430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:53:39.090036  232430 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801 for IP: 192.168.85.2
	I1126 20:53:39.090056  232430 certs.go:195] generating shared ca certs ...
	I1126 20:53:39.090071  232430 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.090217  232430 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:53:39.090268  232430 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:53:39.090280  232430 certs.go:257] generating profile certs ...
	I1126 20:53:39.090371  232430 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/client.key
	I1126 20:53:39.090439  232430 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key.ec6d08a2
	I1126 20:53:39.090482  232430 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.key
	I1126 20:53:39.090624  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:53:39.090669  232430 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:53:39.090687  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:53:39.090717  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:53:39.090746  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:53:39.090782  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:53:39.090834  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:53:39.091409  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:53:39.111085  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:53:39.131081  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:53:39.150675  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:53:39.175703  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1126 20:53:39.193732  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:53:39.212217  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:53:39.230737  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:53:39.255424  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:53:39.283103  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:53:39.303012  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:53:39.330502  232430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:53:39.344168  232430 ssh_runner.go:195] Run: openssl version
	I1126 20:53:39.352490  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:53:39.362130  232430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:39.365900  232430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:39.365990  232430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:39.407962  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:53:39.415762  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:53:39.423668  232430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:53:39.428006  232430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:53:39.428120  232430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:53:39.472113  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:53:39.480458  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:53:39.488646  232430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:53:39.492457  232430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:53:39.492531  232430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:53:39.534127  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
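	The `openssl x509 -hash` calls and `ln -fs ... /etc/ssl/certs/<hash>.0` commands above exist because OpenSSL locates trusted CAs by subject-hash symlinks, which is why `minikubeCA.pem` ends up linked as `b5213941.0`. A sketch of the same mechanism with a throwaway self-signed certificate (the CN and directory are illustrative, not minikube's real CA):

```shell
dir=$(mktemp -d)
# Throwaway self-signed cert standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/demoCA.pem" -days 1 2>/dev/null

# Compute the subject hash and create the <hash>.0 lookup symlink,
# mirroring what the log does under /etc/ssl/certs.
hash=$(openssl x509 -hash -noout -in "$dir/demoCA.pem")
ln -fs "$dir/demoCA.pem" "$dir/$hash.0"
ls "$dir"
```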
	I1126 20:53:39.542196  232430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:53:39.545815  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:53:39.589091  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:53:39.632705  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:53:39.674810  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:53:39.720209  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:53:39.768920  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
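	The run of `openssl x509 ... -checkend 86400` commands above is minikube's pre-restart expiry check: exit status 0 means the certificate is still valid 86400 seconds (24 hours) from now, and a nonzero status would trigger regeneration. A sketch against a freshly generated two-day certificate (names are illustrative):

```shell
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=test" \
  -keyout "$dir/k.pem" -out "$dir/c.pem" -days 2 2>/dev/null

# Valid for 2 days, checked against a 24h horizon: exits 0.
openssl x509 -noout -in "$dir/c.pem" -checkend 86400 && echo "still valid"
```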
	I1126 20:53:39.821001  232430 kubeadm.go:401] StartCluster: {Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:53:39.821148  232430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:53:39.821224  232430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:53:39.882517  232430 cri.go:89] found id: ""
	I1126 20:53:39.882634  232430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:53:39.891828  232430 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:53:39.891896  232430 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:53:39.891974  232430 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:53:39.908470  232430 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:53:39.909115  232430 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-583801" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:53:39.909412  232430 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-2326/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-583801" cluster setting kubeconfig missing "newest-cni-583801" context setting]
	I1126 20:53:39.909916  232430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.911663  232430 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:53:39.927837  232430 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1126 20:53:39.927910  232430 kubeadm.go:602] duration metric: took 35.995984ms to restartPrimaryControlPlane
	I1126 20:53:39.927935  232430 kubeadm.go:403] duration metric: took 106.942909ms to StartCluster
	I1126 20:53:39.927978  232430 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.928065  232430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:53:39.929037  232430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.929289  232430 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:53:39.929688  232430 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:39.929683  232430 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:53:39.929835  232430 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-583801"
	I1126 20:53:39.929857  232430 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-583801"
	W1126 20:53:39.929864  232430 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:53:39.929886  232430 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:39.930468  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:39.931431  232430 addons.go:70] Setting dashboard=true in profile "newest-cni-583801"
	I1126 20:53:39.931456  232430 addons.go:239] Setting addon dashboard=true in "newest-cni-583801"
	W1126 20:53:39.931463  232430 addons.go:248] addon dashboard should already be in state true
	I1126 20:53:39.931487  232430 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:39.931941  232430 addons.go:70] Setting default-storageclass=true in profile "newest-cni-583801"
	I1126 20:53:39.932129  232430 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-583801"
	I1126 20:53:39.931948  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:39.935682  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:39.936069  232430 out.go:179] * Verifying Kubernetes components...
	I1126 20:53:39.946487  232430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:39.986467  232430 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:53:39.992014  232430 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:53:39.992038  232430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:53:39.992103  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:39.997474  232430 addons.go:239] Setting addon default-storageclass=true in "newest-cni-583801"
	W1126 20:53:39.997502  232430 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:53:39.997527  232430 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:39.999962  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:40.036086  232430 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:53:40.042265  232430 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:53:40.048152  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:53:40.048197  232430 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:53:40.048290  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:40.050104  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:40.063728  232430 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:53:40.063750  232430 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:53:40.063811  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:40.094270  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:40.107034  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:40.301165  232430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:53:40.310971  232430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:53:40.367032  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:53:40.367104  232430 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:53:40.417315  232430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:53:40.431253  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:53:40.431315  232430 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:53:40.499903  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:53:40.499968  232430 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:53:40.572044  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:53:40.572108  232430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:53:40.627747  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:53:40.627818  232430 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:53:40.670335  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:53:40.670406  232430 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:53:40.693837  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:53:40.693908  232430 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:53:40.718979  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:53:40.719049  232430 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:53:40.745578  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:53:40.745659  232430 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:53:40.766507  232430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.860114567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.867195088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.867925455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.882465481Z" level=info msg="Created container 985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj/dashboard-metrics-scraper" id=e8acc919-e748-435f-80e5-36caf02f4cf3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.883180267Z" level=info msg="Starting container: 985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e" id=137f3beb-0d7e-412c-bbe9-9bad4eab5d61 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.888783365Z" level=info msg="Started container" PID=1665 containerID=985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj/dashboard-metrics-scraper id=137f3beb-0d7e-412c-bbe9-9bad4eab5d61 name=/runtime.v1.RuntimeService/StartContainer sandboxID=653ad756e8f2cbe7e6caf1a3f9888648498308c92c04ddf5a7113647713f2bd0
	Nov 26 20:53:34 default-k8s-diff-port-538119 conmon[1663]: conmon 985e2568eca0a4becf1e <ninfo>: container 1665 exited with status 1
	Nov 26 20:53:35 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:35.110527035Z" level=info msg="Removing container: 0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9" id=73e4e83a-dd97-4b06-a9c0-b426bbf6f7e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:53:35 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:35.122570508Z" level=info msg="Error loading conmon cgroup of container 0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9: cgroup deleted" id=73e4e83a-dd97-4b06-a9c0-b426bbf6f7e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:53:35 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:35.13183454Z" level=info msg="Removed container 0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj/dashboard-metrics-scraper" id=73e4e83a-dd97-4b06-a9c0-b426bbf6f7e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.667848359Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.675715Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.675883577Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.675966603Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.679598779Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.679749888Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.679821197Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.684385867Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.684550047Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.684629052Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.694212854Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.694377231Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.694452305Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.697638816Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.697789006Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	985e2568eca0a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   2                   653ad756e8f2c       dashboard-metrics-scraper-6ffb444bf9-l2zcj             kubernetes-dashboard
	37d358b29691a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   430f0fdadcb3d       storage-provisioner                                    kube-system
	ebf08cb7657a6       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago      Running             kubernetes-dashboard        0                   80ce7f6a547d8       kubernetes-dashboard-855c9754f9-rktgh                  kubernetes-dashboard
	bbc4ffa86f03f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago      Running             coredns                     1                   425b9113eb9c3       coredns-66bc5c9577-whx45                               kube-system
	43b904d03cdf0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   a99b4e0b6c797       busybox                                                default
	1451c264cad0b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   ec255c27ec93e       kindnet-ts8sn                                          kube-system
	9edd7747a1eb7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   430f0fdadcb3d       storage-provisioner                                    kube-system
	d6cd6ce6790b4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   bf414567ff39b       kube-proxy-sp5l4                                       kube-system
	ebea4280eb674       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   d94b572b51fe8       kube-controller-manager-default-k8s-diff-port-538119   kube-system
	fc58d11ea9332       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   68c80761d551f       etcd-default-k8s-diff-port-538119                      kube-system
	220d1f4d36b36       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   689d0f1d37489       kube-apiserver-default-k8s-diff-port-538119            kube-system
	192c4461955e1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   bfff480c211ac       kube-scheduler-default-k8s-diff-port-538119            kube-system
	
	
	==> coredns [bbc4ffa86f03ffd0b7f32a69952e54fd4a11931def215ade5a35c91e6997fa4d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60250 - 6559 "HINFO IN 3437035850604151606.3091126414398460449. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017057241s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-538119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-538119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=default-k8s-diff-port-538119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_51_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:51:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-538119
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:53:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:53:26 +0000   Wed, 26 Nov 2025 20:51:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:53:26 +0000   Wed, 26 Nov 2025 20:51:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:53:26 +0000   Wed, 26 Nov 2025 20:51:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:53:26 +0000   Wed, 26 Nov 2025 20:52:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-538119
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                6b16bd81-d69e-4bbf-af91-d5d3d851d05d
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-whx45                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m14s
	  kube-system                 etcd-default-k8s-diff-port-538119                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m19s
	  kube-system                 kindnet-ts8sn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-default-k8s-diff-port-538119             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-538119    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-sp5l4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-default-k8s-diff-port-538119             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-l2zcj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rktgh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m13s                  kube-proxy       
	  Normal   Starting                 49s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m29s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m29s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m29s (x8 over 2m29s)  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m19s                  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m19s                  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m19s                  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m15s                  node-controller  Node default-k8s-diff-port-538119 event: Registered Node default-k8s-diff-port-538119 in Controller
	  Normal   NodeReady                93s                    kubelet          Node default-k8s-diff-port-538119 status is now: NodeReady
	  Normal   Starting                 59s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node default-k8s-diff-port-538119 event: Registered Node default-k8s-diff-port-538119 in Controller
	
	
	==> dmesg <==
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	[Nov26 20:49] overlayfs: idmapped layers are currently not supported
	[Nov26 20:50] overlayfs: idmapped layers are currently not supported
	[Nov26 20:51] overlayfs: idmapped layers are currently not supported
	[ +24.066506] overlayfs: idmapped layers are currently not supported
	[Nov26 20:52] overlayfs: idmapped layers are currently not supported
	[Nov26 20:53] overlayfs: idmapped layers are currently not supported
	[ +25.622621] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fc58d11ea93321e33cff7333a94130c39e21c09f52f801603b1a6a3a6ad98d31] <==
	{"level":"warn","ts":"2025-11-26T20:52:54.170519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.206223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.226117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.258932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.260949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.278977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.305331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.330679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.343323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.378607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.391194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.413559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.441675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.466890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.496256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.538484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.553297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.598947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.631605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.661133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.723290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.756191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.777120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.805566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:55.021090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55288","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:53:47 up  1:35,  0 user,  load average: 3.49, 3.36, 2.70
	Linux default-k8s-diff-port-538119 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1451c264cad0b3e134f425cba27c32de088a5ac1e0f20d19dcac5bb5fac0b13d] <==
	I1126 20:52:57.470063       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:52:57.470257       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:52:57.470381       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:52:57.470393       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:52:57.470405       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:52:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:52:57.727972       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:52:57.728074       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:52:57.728110       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:52:57.728536       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:53:27.667073       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:53:27.728777       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 20:53:27.728777       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:53:27.728999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1126 20:53:29.129011       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:53:29.129057       1 metrics.go:72] Registering metrics
	I1126 20:53:29.129112       1 controller.go:711] "Syncing nftables rules"
	I1126 20:53:37.667480       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:53:37.667616       1 main.go:301] handling current node
	
	
	==> kube-apiserver [220d1f4d36b36e980115005c48030f8c1bcbf01b34d094b15f89d89ca0ae205f] <==
	I1126 20:52:55.994349       1 policy_source.go:240] refreshing policies
	I1126 20:52:55.995155       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:52:55.995172       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:52:55.995179       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:52:55.995185       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:52:56.017787       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:52:56.028457       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1126 20:52:56.028479       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1126 20:52:56.028649       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1126 20:52:56.028735       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:52:56.033836       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:52:56.051542       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:52:56.083749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1126 20:52:56.169293       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:52:56.635790       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:52:56.871494       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:52:57.126832       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:52:57.334055       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:52:57.438619       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:52:57.490059       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:52:57.619270       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.81.183"}
	I1126 20:52:57.645683       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.198.229"}
	I1126 20:52:59.876514       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:52:59.926464       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:52:59.980983       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ebea4280eb674478aadbae605d2061b7c068854e5d7ec7d5b4fb24f16fe0cfb9] <==
	I1126 20:52:59.499861       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:52:59.499861       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:52:59.500480       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:52:59.500844       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:52:59.509281       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:52:59.514022       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:52:59.517315       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:52:59.518512       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:52:59.518553       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:52:59.518578       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 20:52:59.518698       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:52:59.518759       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:52:59.519251       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:52:59.519522       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:52:59.520682       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 20:52:59.524495       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 20:52:59.551800       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:52:59.551891       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:52:59.551940       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:52:59.551950       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:52:59.551957       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:52:59.556130       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:52:59.569322       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:52:59.569346       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:52:59.569355       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [d6cd6ce6790b4b0fda712fb3190ae2bd302a3535807ba5a84ec859b03d974194] <==
	I1126 20:52:57.563267       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:52:57.700464       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:52:57.805110       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:52:57.805146       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1126 20:52:57.805264       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:52:57.840671       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:52:57.840795       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:52:57.844885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:52:57.845363       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:52:57.845427       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:52:57.849376       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:52:57.849397       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:52:57.849682       1 config.go:200] "Starting service config controller"
	I1126 20:52:57.849697       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:52:57.850022       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:52:57.850034       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:52:57.850331       1 config.go:309] "Starting node config controller"
	I1126 20:52:57.850377       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:52:57.850405       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:52:57.950101       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:52:57.950109       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 20:52:57.950137       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [192c4461955e12aeca35caebeb96aaa6b7c140e0c20bce5b442625309d73063a] <==
	I1126 20:52:52.271405       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:52:55.890215       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:52:55.890249       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:52:55.890259       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:52:55.890269       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:52:56.048115       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:52:56.048152       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:52:56.083429       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:52:56.083559       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:52:56.083595       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:52:56.083614       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:52:56.189610       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:00.399055     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2k5d\" (UniqueName: \"kubernetes.io/projected/975abbcd-6e87-4996-aeef-10e9c652170b-kube-api-access-b2k5d\") pod \"kubernetes-dashboard-855c9754f9-rktgh\" (UID: \"975abbcd-6e87-4996-aeef-10e9c652170b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rktgh"
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:00.399703     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/975abbcd-6e87-4996-aeef-10e9c652170b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rktgh\" (UID: \"975abbcd-6e87-4996-aeef-10e9c652170b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rktgh"
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:00.399916     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-l2zcj\" (UID: \"d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj"
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:00.400102     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrvw8\" (UniqueName: \"kubernetes.io/projected/d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4-kube-api-access-vrvw8\") pod \"dashboard-metrics-scraper-6ffb444bf9-l2zcj\" (UID: \"d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj"
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: W1126 20:53:00.689302     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/crio-80ce7f6a547d819afc5551ebda5a6cfdee96aa96284cd9fd2565054f1c0807b5 WatchSource:0}: Error finding container 80ce7f6a547d819afc5551ebda5a6cfdee96aa96284cd9fd2565054f1c0807b5: Status 404 returned error can't find the container with id 80ce7f6a547d819afc5551ebda5a6cfdee96aa96284cd9fd2565054f1c0807b5
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: W1126 20:53:00.724781     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/crio-653ad756e8f2cbe7e6caf1a3f9888648498308c92c04ddf5a7113647713f2bd0 WatchSource:0}: Error finding container 653ad756e8f2cbe7e6caf1a3f9888648498308c92c04ddf5a7113647713f2bd0: Status 404 returned error can't find the container with id 653ad756e8f2cbe7e6caf1a3f9888648498308c92c04ddf5a7113647713f2bd0
	Nov 26 20:53:08 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:08.061289     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rktgh" podStartSLOduration=1.449661976 podStartE2EDuration="8.061178627s" podCreationTimestamp="2025-11-26 20:53:00 +0000 UTC" firstStartedPulling="2025-11-26 20:53:00.692225084 +0000 UTC m=+12.137518420" lastFinishedPulling="2025-11-26 20:53:07.303741736 +0000 UTC m=+18.749035071" observedRunningTime="2025-11-26 20:53:08.060764025 +0000 UTC m=+19.506057369" watchObservedRunningTime="2025-11-26 20:53:08.061178627 +0000 UTC m=+19.506471963"
	Nov 26 20:53:14 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:14.043572     782 scope.go:117] "RemoveContainer" containerID="f11f49c214d4ed0c7934c1b1f8b7d2fe38c0ce44ed9be20a394365ebea6c33d0"
	Nov 26 20:53:15 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:15.048287     782 scope.go:117] "RemoveContainer" containerID="f11f49c214d4ed0c7934c1b1f8b7d2fe38c0ce44ed9be20a394365ebea6c33d0"
	Nov 26 20:53:15 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:15.048602     782 scope.go:117] "RemoveContainer" containerID="0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9"
	Nov 26 20:53:15 default-k8s-diff-port-538119 kubelet[782]: E1126 20:53:15.048757     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2zcj_kubernetes-dashboard(d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj" podUID="d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4"
	Nov 26 20:53:16 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:16.052464     782 scope.go:117] "RemoveContainer" containerID="0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9"
	Nov 26 20:53:16 default-k8s-diff-port-538119 kubelet[782]: E1126 20:53:16.052635     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2zcj_kubernetes-dashboard(d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj" podUID="d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4"
	Nov 26 20:53:20 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:20.644133     782 scope.go:117] "RemoveContainer" containerID="0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9"
	Nov 26 20:53:20 default-k8s-diff-port-538119 kubelet[782]: E1126 20:53:20.644333     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2zcj_kubernetes-dashboard(d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj" podUID="d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4"
	Nov 26 20:53:28 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:28.086111     782 scope.go:117] "RemoveContainer" containerID="9edd7747a1eb77ffab56dbbfa69d70a61e1dc6edec2dbb9c8873ad6e848517d0"
	Nov 26 20:53:34 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:34.856842     782 scope.go:117] "RemoveContainer" containerID="0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9"
	Nov 26 20:53:35 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:35.106229     782 scope.go:117] "RemoveContainer" containerID="0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9"
	Nov 26 20:53:35 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:35.109292     782 scope.go:117] "RemoveContainer" containerID="985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e"
	Nov 26 20:53:35 default-k8s-diff-port-538119 kubelet[782]: E1126 20:53:35.109483     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2zcj_kubernetes-dashboard(d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj" podUID="d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4"
	Nov 26 20:53:40 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:40.643385     782 scope.go:117] "RemoveContainer" containerID="985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e"
	Nov 26 20:53:40 default-k8s-diff-port-538119 kubelet[782]: E1126 20:53:40.643574     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2zcj_kubernetes-dashboard(d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj" podUID="d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4"
	Nov 26 20:53:43 default-k8s-diff-port-538119 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:53:43 default-k8s-diff-port-538119 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:53:43 default-k8s-diff-port-538119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ebf08cb7657a6ca910fdbb8f925d3bb2d31f344e7692e636ce0c0a3e75654569] <==
	2025/11/26 20:53:07 Using namespace: kubernetes-dashboard
	2025/11/26 20:53:07 Using in-cluster config to connect to apiserver
	2025/11/26 20:53:07 Using secret token for csrf signing
	2025/11/26 20:53:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:53:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:53:07 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:53:07 Generating JWE encryption key
	2025/11/26 20:53:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:53:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:53:09 Initializing JWE encryption key from synchronized object
	2025/11/26 20:53:09 Creating in-cluster Sidecar client
	2025/11/26 20:53:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:53:09 Serving insecurely on HTTP port: 9090
	2025/11/26 20:53:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:53:07 Starting overwatch
	
	
	==> storage-provisioner [37d358b29691acefbe7a5309e329f27200aa8514dd0f7f283352c3b4cd48c2a1] <==
	I1126 20:53:28.155784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:53:28.169193       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:53:28.169303       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:53:28.178889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:31.633520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:35.897341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:39.496379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:42.557945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:45.583705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:45.595974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:53:45.596189       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:53:45.596410       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538119_6b7e51c9-57ed-4396-a194-b95821c5a632!
	I1126 20:53:45.597356       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"73b04fd0-3ce6-4808-aac2-0c1574a9d61f", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-538119_6b7e51c9-57ed-4396-a194-b95821c5a632 became leader
	W1126 20:53:45.631315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:45.643398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:53:45.696727       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538119_6b7e51c9-57ed-4396-a194-b95821c5a632!
	W1126 20:53:47.652051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:47.667810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9edd7747a1eb77ffab56dbbfa69d70a61e1dc6edec2dbb9c8873ad6e848517d0] <==
	I1126 20:52:57.353509       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:53:27.355255       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119: exit status 2 (538.095288ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-538119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-538119
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-538119:

-- stdout --
	[
	    {
	        "Id": "0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45",
	        "Created": "2025-11-26T20:51:00.643686103Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 226590,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:52:40.96898825Z",
	            "FinishedAt": "2025-11-26T20:52:39.998828588Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/hostname",
	        "HostsPath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/hosts",
	        "LogPath": "/var/lib/docker/containers/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45-json.log",
	        "Name": "/default-k8s-diff-port-538119",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-538119:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-538119",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45",
	                "LowerDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fa0634dae07369695cdbc978c5931db6f7285748bd04ee866489bb21cee8f25/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-538119",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-538119/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-538119",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-538119",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-538119",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b6ac046f9dfa6460679f8f70f9bb70ea6ab78f2110f1c360751b6ccb655e792e",
	            "SandboxKey": "/var/run/docker/netns/b6ac046f9dfa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-538119": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:2c:17:37:f3:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "58099cffa65b0cb809ecb55668d778b1399828737559d8aaf8663745e845c3ba",
	                    "EndpointID": "89a4060e2fde7c1d15a94683e6d901522b7b0ff5fbe5ec71d630e68387a78e9f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-538119",
	                        "0376b85fe7a8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119: exit status 2 (498.187669ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-538119 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-538119 logs -n 25: (2.152049084s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-956694 image list --format=json                                                                                                                                                                                                    │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ pause   │ -p no-preload-956694 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │                     │
	│ delete  │ -p no-preload-956694                                                                                                                                                                                                                          │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p no-preload-956694                                                                                                                                                                                                                          │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p disable-driver-mounts-180932                                                                                                                                                                                                               │ disable-driver-mounts-180932 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │                     │
	│ stop    │ -p embed-certs-616586 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-616586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538119 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ image   │ embed-certs-616586 image list --format=json                                                                                                                                                                                                   │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ pause   │ -p embed-certs-616586 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:53 UTC │
	│ delete  │ -p embed-certs-616586                                                                                                                                                                                                                         │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ delete  │ -p embed-certs-616586                                                                                                                                                                                                                         │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ start   │ -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-583801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	│ stop    │ -p newest-cni-583801 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-583801 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ start   │ -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	│ image   │ default-k8s-diff-port-538119 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ pause   │ -p default-k8s-diff-port-538119 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:53:32
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:53:32.058363  232430 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:53:32.058691  232430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:32.058726  232430 out.go:374] Setting ErrFile to fd 2...
	I1126 20:53:32.058747  232430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:32.059051  232430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:53:32.059474  232430 out.go:368] Setting JSON to false
	I1126 20:53:32.060471  232430 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5742,"bootTime":1764184670,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:53:32.060576  232430 start.go:143] virtualization:  
	I1126 20:53:32.063909  232430 out.go:179] * [newest-cni-583801] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:53:32.067898  232430 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:53:32.067980  232430 notify.go:221] Checking for updates...
	I1126 20:53:32.074206  232430 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:53:32.077087  232430 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:53:32.080000  232430 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:53:32.083011  232430 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:53:32.086000  232430 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:53:32.089464  232430 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:32.090230  232430 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:53:32.123576  232430 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:53:32.123684  232430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:53:32.181819  232430 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:53:32.171614062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:53:32.181945  232430 docker.go:319] overlay module found
	I1126 20:53:32.185115  232430 out.go:179] * Using the docker driver based on existing profile
	I1126 20:53:32.187998  232430 start.go:309] selected driver: docker
	I1126 20:53:32.188016  232430 start.go:927] validating driver "docker" against &{Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:53:32.188123  232430 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:53:32.188873  232430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:53:32.247743  232430 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:53:32.237861309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:53:32.248097  232430 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:53:32.248130  232430 cni.go:84] Creating CNI manager for ""
	I1126 20:53:32.248192  232430 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:53:32.248235  232430 start.go:353] cluster config:
	{Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:53:32.253258  232430 out.go:179] * Starting "newest-cni-583801" primary control-plane node in "newest-cni-583801" cluster
	I1126 20:53:32.256177  232430 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:53:32.259057  232430 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:53:32.262071  232430 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:53:32.262125  232430 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:53:32.262125  232430 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:53:32.262135  232430 cache.go:65] Caching tarball of preloaded images
	I1126 20:53:32.262351  232430 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:53:32.262363  232430 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:53:32.262584  232430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/config.json ...
	I1126 20:53:32.282185  232430 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:53:32.282208  232430 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:53:32.282228  232430 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:53:32.282258  232430 start.go:360] acquireMachinesLock for newest-cni-583801: {Name:mk5a5c4e74106a93e4d595458226ad93568e2c2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:53:32.282328  232430 start.go:364] duration metric: took 46.324µs to acquireMachinesLock for "newest-cni-583801"
	I1126 20:53:32.282350  232430 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:53:32.282356  232430 fix.go:54] fixHost starting: 
	I1126 20:53:32.282629  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:32.299954  232430 fix.go:112] recreateIfNeeded on newest-cni-583801: state=Stopped err=<nil>
	W1126 20:53:32.299985  232430 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:53:32.303287  232430 out.go:252] * Restarting existing docker container for "newest-cni-583801" ...
	I1126 20:53:32.303379  232430 cli_runner.go:164] Run: docker start newest-cni-583801
	I1126 20:53:32.554974  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:32.585873  232430 kic.go:430] container "newest-cni-583801" state is running.
	I1126 20:53:32.586285  232430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:53:32.606597  232430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/config.json ...
	I1126 20:53:32.606821  232430 machine.go:94] provisionDockerMachine start ...
	I1126 20:53:32.606878  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:32.629439  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:32.630040  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:32.630056  232430 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:53:32.630649  232430 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40338->127.0.0.1:33088: read: connection reset by peer
	I1126 20:53:35.790036  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-583801
	
	I1126 20:53:35.790065  232430 ubuntu.go:182] provisioning hostname "newest-cni-583801"
	I1126 20:53:35.790129  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:35.808513  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:35.808921  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:35.808938  232430 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-583801 && echo "newest-cni-583801" | sudo tee /etc/hostname
	I1126 20:53:35.968272  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-583801
	
	I1126 20:53:35.968372  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:35.985281  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:35.985588  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:35.985608  232430 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-583801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-583801/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-583801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:53:36.134065  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1126 20:53:36.134089  232430 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:53:36.134110  232430 ubuntu.go:190] setting up certificates
	I1126 20:53:36.134120  232430 provision.go:84] configureAuth start
	I1126 20:53:36.134186  232430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:53:36.150568  232430 provision.go:143] copyHostCerts
	I1126 20:53:36.150637  232430 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:53:36.150656  232430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:53:36.150733  232430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:53:36.150850  232430 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:53:36.150861  232430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:53:36.150889  232430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:53:36.150959  232430 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:53:36.150968  232430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:53:36.150995  232430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:53:36.151056  232430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.newest-cni-583801 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-583801]
	I1126 20:53:36.403502  232430 provision.go:177] copyRemoteCerts
	I1126 20:53:36.403577  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:53:36.403620  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:36.421644  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:36.529798  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:53:36.550564  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:53:36.568751  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:53:36.587015  232430 provision.go:87] duration metric: took 452.872031ms to configureAuth
	I1126 20:53:36.587084  232430 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:53:36.587333  232430 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:36.587487  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:36.607919  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:36.608234  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:36.608248  232430 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:53:36.958850  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:53:36.958868  232430 machine.go:97] duration metric: took 4.352038468s to provisionDockerMachine
	I1126 20:53:36.958880  232430 start.go:293] postStartSetup for "newest-cni-583801" (driver="docker")
	I1126 20:53:36.958891  232430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:53:36.958970  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:53:36.959007  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:36.981192  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.093758  232430 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:53:37.097114  232430 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:53:37.097141  232430 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:53:37.097152  232430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:53:37.097210  232430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:53:37.097285  232430 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:53:37.097387  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:53:37.104454  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:53:37.123849  232430 start.go:296] duration metric: took 164.954962ms for postStartSetup
	I1126 20:53:37.123942  232430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:53:37.123986  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:37.154074  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.255041  232430 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:53:37.259591  232430 fix.go:56] duration metric: took 4.977229179s for fixHost
	I1126 20:53:37.259622  232430 start.go:83] releasing machines lock for "newest-cni-583801", held for 4.977273748s
	I1126 20:53:37.259685  232430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:53:37.277023  232430 ssh_runner.go:195] Run: cat /version.json
	I1126 20:53:37.277072  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:37.277368  232430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:53:37.277419  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:37.295698  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.296359  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.397677  232430 ssh_runner.go:195] Run: systemctl --version
	I1126 20:53:37.512657  232430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:53:37.548900  232430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:53:37.554052  232430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:53:37.554156  232430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:53:37.562330  232430 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1126 20:53:37.562356  232430 start.go:496] detecting cgroup driver to use...
	I1126 20:53:37.562394  232430 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:53:37.562446  232430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:53:37.577492  232430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:53:37.593094  232430 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:53:37.593215  232430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:53:37.611076  232430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:53:37.624211  232430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:53:37.766892  232430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:53:37.889373  232430 docker.go:234] disabling docker service ...
	I1126 20:53:37.889436  232430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:53:37.904993  232430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:53:37.918557  232430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:53:38.036692  232430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:53:38.159223  232430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:53:38.173000  232430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:53:38.190826  232430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:53:38.190920  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.200346  232430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:53:38.200425  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.209865  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.220265  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.229629  232430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:53:38.238736  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.249532  232430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.258061  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.267114  232430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:53:38.274404  232430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:53:38.281917  232430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:38.418686  232430 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:53:38.600747  232430 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:53:38.600819  232430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:53:38.604608  232430 start.go:564] Will wait 60s for crictl version
	I1126 20:53:38.604739  232430 ssh_runner.go:195] Run: which crictl
	I1126 20:53:38.608219  232430 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:53:38.636623  232430 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:53:38.636773  232430 ssh_runner.go:195] Run: crio --version
	I1126 20:53:38.665852  232430 ssh_runner.go:195] Run: crio --version
	I1126 20:53:38.696090  232430 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:53:38.698804  232430 cli_runner.go:164] Run: docker network inspect newest-cni-583801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:53:38.715485  232430 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:53:38.719345  232430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:53:38.731937  232430 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1126 20:53:38.734654  232430 kubeadm.go:884] updating cluster {Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:53:38.734808  232430 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:53:38.734877  232430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:53:38.768850  232430 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:53:38.768875  232430 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:53:38.768939  232430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:53:38.793625  232430 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:53:38.793649  232430 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:53:38.793658  232430 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1126 20:53:38.793759  232430 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-583801 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:53:38.793882  232430 ssh_runner.go:195] Run: crio config
	I1126 20:53:38.865073  232430 cni.go:84] Creating CNI manager for ""
	I1126 20:53:38.865138  232430 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:53:38.865169  232430 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1126 20:53:38.865220  232430 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-583801 NodeName:newest-cni-583801 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:53:38.865412  232430 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-583801"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:53:38.865499  232430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:53:38.874386  232430 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:53:38.874499  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:53:38.882476  232430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1126 20:53:38.895474  232430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:53:38.908580  232430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1126 20:53:38.936361  232430 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:53:38.940405  232430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:53:38.950570  232430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:39.073766  232430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:53:39.090036  232430 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801 for IP: 192.168.85.2
	I1126 20:53:39.090056  232430 certs.go:195] generating shared ca certs ...
	I1126 20:53:39.090071  232430 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.090217  232430 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:53:39.090268  232430 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:53:39.090280  232430 certs.go:257] generating profile certs ...
	I1126 20:53:39.090371  232430 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/client.key
	I1126 20:53:39.090439  232430 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key.ec6d08a2
	I1126 20:53:39.090482  232430 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.key
	I1126 20:53:39.090624  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:53:39.090669  232430 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:53:39.090687  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:53:39.090717  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:53:39.090746  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:53:39.090782  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:53:39.090834  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:53:39.091409  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:53:39.111085  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:53:39.131081  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:53:39.150675  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:53:39.175703  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1126 20:53:39.193732  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:53:39.212217  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:53:39.230737  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:53:39.255424  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:53:39.283103  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:53:39.303012  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:53:39.330502  232430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:53:39.344168  232430 ssh_runner.go:195] Run: openssl version
	I1126 20:53:39.352490  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:53:39.362130  232430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:39.365900  232430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:39.365990  232430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:39.407962  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:53:39.415762  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:53:39.423668  232430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:53:39.428006  232430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:53:39.428120  232430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:53:39.472113  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:53:39.480458  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:53:39.488646  232430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:53:39.492457  232430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:53:39.492531  232430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:53:39.534127  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:53:39.542196  232430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:53:39.545815  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:53:39.589091  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:53:39.632705  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:53:39.674810  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:53:39.720209  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:53:39.768920  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1126 20:53:39.821001  232430 kubeadm.go:401] StartCluster: {Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:53:39.821148  232430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:53:39.821224  232430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:53:39.882517  232430 cri.go:89] found id: ""
	I1126 20:53:39.882634  232430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:53:39.891828  232430 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:53:39.891896  232430 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:53:39.891974  232430 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:53:39.908470  232430 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:53:39.909115  232430 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-583801" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:53:39.909412  232430 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-2326/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-583801" cluster setting kubeconfig missing "newest-cni-583801" context setting]
	I1126 20:53:39.909916  232430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.911663  232430 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:53:39.927837  232430 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1126 20:53:39.927910  232430 kubeadm.go:602] duration metric: took 35.995984ms to restartPrimaryControlPlane
	I1126 20:53:39.927935  232430 kubeadm.go:403] duration metric: took 106.942909ms to StartCluster
	I1126 20:53:39.927978  232430 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.928065  232430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:53:39.929037  232430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.929289  232430 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:53:39.929688  232430 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:39.929683  232430 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:53:39.929835  232430 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-583801"
	I1126 20:53:39.929857  232430 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-583801"
	W1126 20:53:39.929864  232430 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:53:39.929886  232430 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:39.930468  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:39.931431  232430 addons.go:70] Setting dashboard=true in profile "newest-cni-583801"
	I1126 20:53:39.931456  232430 addons.go:239] Setting addon dashboard=true in "newest-cni-583801"
	W1126 20:53:39.931463  232430 addons.go:248] addon dashboard should already be in state true
	I1126 20:53:39.931487  232430 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:39.931941  232430 addons.go:70] Setting default-storageclass=true in profile "newest-cni-583801"
	I1126 20:53:39.932129  232430 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-583801"
	I1126 20:53:39.931948  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:39.935682  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:39.936069  232430 out.go:179] * Verifying Kubernetes components...
	I1126 20:53:39.946487  232430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:39.986467  232430 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:53:39.992014  232430 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:53:39.992038  232430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:53:39.992103  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:39.997474  232430 addons.go:239] Setting addon default-storageclass=true in "newest-cni-583801"
	W1126 20:53:39.997502  232430 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:53:39.997527  232430 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:39.999962  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:40.036086  232430 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:53:40.042265  232430 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:53:40.048152  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:53:40.048197  232430 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:53:40.048290  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:40.050104  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:40.063728  232430 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:53:40.063750  232430 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:53:40.063811  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:40.094270  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:40.107034  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:40.301165  232430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:53:40.310971  232430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:53:40.367032  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:53:40.367104  232430 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:53:40.417315  232430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:53:40.431253  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:53:40.431315  232430 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:53:40.499903  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:53:40.499968  232430 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:53:40.572044  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:53:40.572108  232430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:53:40.627747  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:53:40.627818  232430 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:53:40.670335  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:53:40.670406  232430 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:53:40.693837  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:53:40.693908  232430 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:53:40.718979  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:53:40.719049  232430 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:53:40.745578  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:53:40.745659  232430 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:53:40.766507  232430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:53:49.061232  232430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.760022983s)
	I1126 20:53:49.061319  232430 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.750328431s)
	I1126 20:53:49.061346  232430 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:53:49.061404  232430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:53:49.061468  232430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.64408306s)
	I1126 20:53:49.282327  232430 api_server.go:72] duration metric: took 9.352983796s to wait for apiserver process to appear ...
	I1126 20:53:49.282351  232430 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:53:49.282368  232430 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:53:49.283534  232430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.516938484s)
	I1126 20:53:49.286516  232430 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-583801 addons enable metrics-server
	
	I1126 20:53:49.290067  232430 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	
	
	==> CRI-O <==
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.860114567Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.867195088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.867925455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.882465481Z" level=info msg="Created container 985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj/dashboard-metrics-scraper" id=e8acc919-e748-435f-80e5-36caf02f4cf3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.883180267Z" level=info msg="Starting container: 985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e" id=137f3beb-0d7e-412c-bbe9-9bad4eab5d61 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:53:34 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:34.888783365Z" level=info msg="Started container" PID=1665 containerID=985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj/dashboard-metrics-scraper id=137f3beb-0d7e-412c-bbe9-9bad4eab5d61 name=/runtime.v1.RuntimeService/StartContainer sandboxID=653ad756e8f2cbe7e6caf1a3f9888648498308c92c04ddf5a7113647713f2bd0
	Nov 26 20:53:34 default-k8s-diff-port-538119 conmon[1663]: conmon 985e2568eca0a4becf1e <ninfo>: container 1665 exited with status 1
	Nov 26 20:53:35 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:35.110527035Z" level=info msg="Removing container: 0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9" id=73e4e83a-dd97-4b06-a9c0-b426bbf6f7e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:53:35 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:35.122570508Z" level=info msg="Error loading conmon cgroup of container 0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9: cgroup deleted" id=73e4e83a-dd97-4b06-a9c0-b426bbf6f7e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:53:35 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:35.13183454Z" level=info msg="Removed container 0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj/dashboard-metrics-scraper" id=73e4e83a-dd97-4b06-a9c0-b426bbf6f7e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.667848359Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.675715Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.675883577Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.675966603Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.679598779Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.679749888Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.679821197Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.684385867Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.684550047Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.684629052Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.694212854Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.694377231Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.694452305Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.697638816Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 26 20:53:37 default-k8s-diff-port-538119 crio[654]: time="2025-11-26T20:53:37.697789006Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	985e2568eca0a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   2                   653ad756e8f2c       dashboard-metrics-scraper-6ffb444bf9-l2zcj             kubernetes-dashboard
	37d358b29691a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   430f0fdadcb3d       storage-provisioner                                    kube-system
	ebf08cb7657a6       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   80ce7f6a547d8       kubernetes-dashboard-855c9754f9-rktgh                  kubernetes-dashboard
	bbc4ffa86f03f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   425b9113eb9c3       coredns-66bc5c9577-whx45                               kube-system
	43b904d03cdf0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   a99b4e0b6c797       busybox                                                default
	1451c264cad0b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   ec255c27ec93e       kindnet-ts8sn                                          kube-system
	9edd7747a1eb7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   430f0fdadcb3d       storage-provisioner                                    kube-system
	d6cd6ce6790b4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   bf414567ff39b       kube-proxy-sp5l4                                       kube-system
	ebea4280eb674       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   d94b572b51fe8       kube-controller-manager-default-k8s-diff-port-538119   kube-system
	fc58d11ea9332       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   68c80761d551f       etcd-default-k8s-diff-port-538119                      kube-system
	220d1f4d36b36       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   689d0f1d37489       kube-apiserver-default-k8s-diff-port-538119            kube-system
	192c4461955e1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   bfff480c211ac       kube-scheduler-default-k8s-diff-port-538119            kube-system
	
	
	==> coredns [bbc4ffa86f03ffd0b7f32a69952e54fd4a11931def215ade5a35c91e6997fa4d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60250 - 6559 "HINFO IN 3437035850604151606.3091126414398460449. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017057241s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-538119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-538119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=default-k8s-diff-port-538119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_51_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:51:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-538119
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:53:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:53:26 +0000   Wed, 26 Nov 2025 20:51:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:53:26 +0000   Wed, 26 Nov 2025 20:51:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:53:26 +0000   Wed, 26 Nov 2025 20:51:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Nov 2025 20:53:26 +0000   Wed, 26 Nov 2025 20:52:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-538119
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                6b16bd81-d69e-4bbf-af91-d5d3d851d05d
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-whx45                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-default-k8s-diff-port-538119                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-ts8sn                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-538119             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-538119    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-sp5l4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-default-k8s-diff-port-538119             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-l2zcj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rktgh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 52s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x8 over 2m32s)  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m18s                  node-controller  Node default-k8s-diff-port-538119 event: Registered Node default-k8s-diff-port-538119 in Controller
	  Normal   NodeReady                96s                    kubelet          Node default-k8s-diff-port-538119 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-538119 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                    node-controller  Node default-k8s-diff-port-538119 event: Registered Node default-k8s-diff-port-538119 in Controller
	
	
	==> dmesg <==
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	[Nov26 20:49] overlayfs: idmapped layers are currently not supported
	[Nov26 20:50] overlayfs: idmapped layers are currently not supported
	[Nov26 20:51] overlayfs: idmapped layers are currently not supported
	[ +24.066506] overlayfs: idmapped layers are currently not supported
	[Nov26 20:52] overlayfs: idmapped layers are currently not supported
	[Nov26 20:53] overlayfs: idmapped layers are currently not supported
	[ +25.622621] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fc58d11ea93321e33cff7333a94130c39e21c09f52f801603b1a6a3a6ad98d31] <==
	{"level":"warn","ts":"2025-11-26T20:52:54.170519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.206223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.226117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.258932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.260949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.278977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.305331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.330679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.343323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.378607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.391194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.413559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.441675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.466890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.496256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.538484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.553297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.598947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.631605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.661133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.723290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.756191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.777120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:54.805566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:52:55.021090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55288","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:53:50 up  1:36,  0 user,  load average: 4.18, 3.51, 2.75
	Linux default-k8s-diff-port-538119 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1451c264cad0b3e134f425cba27c32de088a5ac1e0f20d19dcac5bb5fac0b13d] <==
	I1126 20:52:57.470063       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:52:57.470257       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1126 20:52:57.470381       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:52:57.470393       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:52:57.470405       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:52:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:52:57.727972       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:52:57.728074       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:52:57.728110       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:52:57.728536       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1126 20:53:27.667073       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1126 20:53:27.728777       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1126 20:53:27.728777       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1126 20:53:27.728999       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1126 20:53:29.129011       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1126 20:53:29.129057       1 metrics.go:72] Registering metrics
	I1126 20:53:29.129112       1 controller.go:711] "Syncing nftables rules"
	I1126 20:53:37.667480       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:53:37.667616       1 main.go:301] handling current node
	I1126 20:53:47.670256       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1126 20:53:47.670295       1 main.go:301] handling current node
	
	
	==> kube-apiserver [220d1f4d36b36e980115005c48030f8c1bcbf01b34d094b15f89d89ca0ae205f] <==
	I1126 20:52:55.994349       1 policy_source.go:240] refreshing policies
	I1126 20:52:55.995155       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:52:55.995172       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:52:55.995179       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:52:55.995185       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:52:56.017787       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1126 20:52:56.028457       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1126 20:52:56.028479       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1126 20:52:56.028649       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1126 20:52:56.028735       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1126 20:52:56.033836       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1126 20:52:56.051542       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:52:56.083749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1126 20:52:56.169293       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1126 20:52:56.635790       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:52:56.871494       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:52:57.126832       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:52:57.334055       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:52:57.438619       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:52:57.490059       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:52:57.619270       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.81.183"}
	I1126 20:52:57.645683       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.198.229"}
	I1126 20:52:59.876514       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:52:59.926464       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:52:59.980983       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ebea4280eb674478aadbae605d2061b7c068854e5d7ec7d5b4fb24f16fe0cfb9] <==
	I1126 20:52:59.499861       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1126 20:52:59.499861       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1126 20:52:59.500480       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1126 20:52:59.500844       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:52:59.509281       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1126 20:52:59.514022       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:52:59.517315       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1126 20:52:59.518512       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1126 20:52:59.518553       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:52:59.518578       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 20:52:59.518698       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1126 20:52:59.518759       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:52:59.519251       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:52:59.519522       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:52:59.520682       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 20:52:59.524495       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1126 20:52:59.551800       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:52:59.551891       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:52:59.551940       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:52:59.551950       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:52:59.551957       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:52:59.556130       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1126 20:52:59.569322       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1126 20:52:59.569346       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1126 20:52:59.569355       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [d6cd6ce6790b4b0fda712fb3190ae2bd302a3535807ba5a84ec859b03d974194] <==
	I1126 20:52:57.563267       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:52:57.700464       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:52:57.805110       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:52:57.805146       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1126 20:52:57.805264       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:52:57.840671       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:52:57.840795       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:52:57.844885       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:52:57.845363       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:52:57.845427       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:52:57.849376       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:52:57.849397       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:52:57.849682       1 config.go:200] "Starting service config controller"
	I1126 20:52:57.849697       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:52:57.850022       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:52:57.850034       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:52:57.850331       1 config.go:309] "Starting node config controller"
	I1126 20:52:57.850377       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:52:57.850405       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:52:57.950101       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:52:57.950109       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1126 20:52:57.950137       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [192c4461955e12aeca35caebeb96aaa6b7c140e0c20bce5b442625309d73063a] <==
	I1126 20:52:52.271405       1 serving.go:386] Generated self-signed cert in-memory
	W1126 20:52:55.890215       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1126 20:52:55.890249       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1126 20:52:55.890259       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1126 20:52:55.890269       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1126 20:52:56.048115       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:52:56.048152       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:52:56.083429       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:52:56.083559       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:52:56.083595       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:52:56.083614       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:52:56.189610       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:00.399055     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2k5d\" (UniqueName: \"kubernetes.io/projected/975abbcd-6e87-4996-aeef-10e9c652170b-kube-api-access-b2k5d\") pod \"kubernetes-dashboard-855c9754f9-rktgh\" (UID: \"975abbcd-6e87-4996-aeef-10e9c652170b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rktgh"
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:00.399703     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/975abbcd-6e87-4996-aeef-10e9c652170b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rktgh\" (UID: \"975abbcd-6e87-4996-aeef-10e9c652170b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rktgh"
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:00.399916     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-l2zcj\" (UID: \"d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj"
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:00.400102     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrvw8\" (UniqueName: \"kubernetes.io/projected/d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4-kube-api-access-vrvw8\") pod \"dashboard-metrics-scraper-6ffb444bf9-l2zcj\" (UID: \"d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj"
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: W1126 20:53:00.689302     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/crio-80ce7f6a547d819afc5551ebda5a6cfdee96aa96284cd9fd2565054f1c0807b5 WatchSource:0}: Error finding container 80ce7f6a547d819afc5551ebda5a6cfdee96aa96284cd9fd2565054f1c0807b5: Status 404 returned error can't find the container with id 80ce7f6a547d819afc5551ebda5a6cfdee96aa96284cd9fd2565054f1c0807b5
	Nov 26 20:53:00 default-k8s-diff-port-538119 kubelet[782]: W1126 20:53:00.724781     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0376b85fe7a8b79eee4ab77cd4f935c2c82c8e466c83a14f66dd123695e7ad45/crio-653ad756e8f2cbe7e6caf1a3f9888648498308c92c04ddf5a7113647713f2bd0 WatchSource:0}: Error finding container 653ad756e8f2cbe7e6caf1a3f9888648498308c92c04ddf5a7113647713f2bd0: Status 404 returned error can't find the container with id 653ad756e8f2cbe7e6caf1a3f9888648498308c92c04ddf5a7113647713f2bd0
	Nov 26 20:53:08 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:08.061289     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rktgh" podStartSLOduration=1.449661976 podStartE2EDuration="8.061178627s" podCreationTimestamp="2025-11-26 20:53:00 +0000 UTC" firstStartedPulling="2025-11-26 20:53:00.692225084 +0000 UTC m=+12.137518420" lastFinishedPulling="2025-11-26 20:53:07.303741736 +0000 UTC m=+18.749035071" observedRunningTime="2025-11-26 20:53:08.060764025 +0000 UTC m=+19.506057369" watchObservedRunningTime="2025-11-26 20:53:08.061178627 +0000 UTC m=+19.506471963"
	Nov 26 20:53:14 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:14.043572     782 scope.go:117] "RemoveContainer" containerID="f11f49c214d4ed0c7934c1b1f8b7d2fe38c0ce44ed9be20a394365ebea6c33d0"
	Nov 26 20:53:15 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:15.048287     782 scope.go:117] "RemoveContainer" containerID="f11f49c214d4ed0c7934c1b1f8b7d2fe38c0ce44ed9be20a394365ebea6c33d0"
	Nov 26 20:53:15 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:15.048602     782 scope.go:117] "RemoveContainer" containerID="0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9"
	Nov 26 20:53:15 default-k8s-diff-port-538119 kubelet[782]: E1126 20:53:15.048757     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2zcj_kubernetes-dashboard(d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj" podUID="d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4"
	Nov 26 20:53:16 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:16.052464     782 scope.go:117] "RemoveContainer" containerID="0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9"
	Nov 26 20:53:16 default-k8s-diff-port-538119 kubelet[782]: E1126 20:53:16.052635     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2zcj_kubernetes-dashboard(d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj" podUID="d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4"
	Nov 26 20:53:20 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:20.644133     782 scope.go:117] "RemoveContainer" containerID="0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9"
	Nov 26 20:53:20 default-k8s-diff-port-538119 kubelet[782]: E1126 20:53:20.644333     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2zcj_kubernetes-dashboard(d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj" podUID="d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4"
	Nov 26 20:53:28 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:28.086111     782 scope.go:117] "RemoveContainer" containerID="9edd7747a1eb77ffab56dbbfa69d70a61e1dc6edec2dbb9c8873ad6e848517d0"
	Nov 26 20:53:34 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:34.856842     782 scope.go:117] "RemoveContainer" containerID="0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9"
	Nov 26 20:53:35 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:35.106229     782 scope.go:117] "RemoveContainer" containerID="0139b56cbe6ccd61f4181a5e38baa5da9adcc805061e2fc103d9e708c4925ac9"
	Nov 26 20:53:35 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:35.109292     782 scope.go:117] "RemoveContainer" containerID="985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e"
	Nov 26 20:53:35 default-k8s-diff-port-538119 kubelet[782]: E1126 20:53:35.109483     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2zcj_kubernetes-dashboard(d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj" podUID="d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4"
	Nov 26 20:53:40 default-k8s-diff-port-538119 kubelet[782]: I1126 20:53:40.643385     782 scope.go:117] "RemoveContainer" containerID="985e2568eca0a4becf1e24621e6d2150c8b96cad4193d2322f5987f37c09d62e"
	Nov 26 20:53:40 default-k8s-diff-port-538119 kubelet[782]: E1126 20:53:40.643574     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l2zcj_kubernetes-dashboard(d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l2zcj" podUID="d9c94490-4d0e-4dc4-9b6a-34a9a0119fa4"
	Nov 26 20:53:43 default-k8s-diff-port-538119 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:53:43 default-k8s-diff-port-538119 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:53:43 default-k8s-diff-port-538119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ebf08cb7657a6ca910fdbb8f925d3bb2d31f344e7692e636ce0c0a3e75654569] <==
	2025/11/26 20:53:07 Using namespace: kubernetes-dashboard
	2025/11/26 20:53:07 Using in-cluster config to connect to apiserver
	2025/11/26 20:53:07 Using secret token for csrf signing
	2025/11/26 20:53:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/26 20:53:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/26 20:53:07 Successful initial request to the apiserver, version: v1.34.1
	2025/11/26 20:53:07 Generating JWE encryption key
	2025/11/26 20:53:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/26 20:53:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/26 20:53:09 Initializing JWE encryption key from synchronized object
	2025/11/26 20:53:09 Creating in-cluster Sidecar client
	2025/11/26 20:53:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:53:09 Serving insecurely on HTTP port: 9090
	2025/11/26 20:53:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/26 20:53:07 Starting overwatch
	
	
	==> storage-provisioner [37d358b29691acefbe7a5309e329f27200aa8514dd0f7f283352c3b4cd48c2a1] <==
	I1126 20:53:28.155784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1126 20:53:28.169193       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1126 20:53:28.169303       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1126 20:53:28.178889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:31.633520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:35.897341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:39.496379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:42.557945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:45.583705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:45.595974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:53:45.596189       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1126 20:53:45.596410       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538119_6b7e51c9-57ed-4396-a194-b95821c5a632!
	I1126 20:53:45.597356       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"73b04fd0-3ce6-4808-aac2-0c1574a9d61f", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-538119_6b7e51c9-57ed-4396-a194-b95821c5a632 became leader
	W1126 20:53:45.631315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:45.643398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1126 20:53:45.696727       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-538119_6b7e51c9-57ed-4396-a194-b95821c5a632!
	W1126 20:53:47.652051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:47.667810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:49.674490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1126 20:53:49.682627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9edd7747a1eb77ffab56dbbfa69d70a61e1dc6edec2dbb9c8873ad6e848517d0] <==
	I1126 20:52:57.353509       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1126 20:53:27.355255       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119: exit status 2 (527.320993ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-538119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (9.43s)

TestStartStop/group/newest-cni/serial/Pause (7.83s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-583801 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-583801 --alsologtostderr -v=1: exit status 80 (2.67521288s)

-- stdout --
	* Pausing node newest-cni-583801 ... 
	
	

-- /stdout --
** stderr ** 
	I1126 20:53:50.917132  235014 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:53:50.917355  235014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:50.917382  235014 out.go:374] Setting ErrFile to fd 2...
	I1126 20:53:50.917402  235014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:50.917681  235014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:53:50.918021  235014 out.go:368] Setting JSON to false
	I1126 20:53:50.918076  235014 mustload.go:66] Loading cluster: newest-cni-583801
	I1126 20:53:50.918565  235014 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:50.919067  235014 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:50.944768  235014 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:50.945077  235014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:53:51.060628  235014 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:53:51.049411836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:53:51.061258  235014 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-583801 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1126 20:53:51.065798  235014 out.go:179] * Pausing node newest-cni-583801 ... 
	I1126 20:53:51.068673  235014 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:51.069024  235014 ssh_runner.go:195] Run: systemctl --version
	I1126 20:53:51.069067  235014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:51.098592  235014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:51.228375  235014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:53:51.256642  235014 pause.go:52] kubelet running: true
	I1126 20:53:51.256703  235014 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:53:51.733434  235014 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:53:51.733526  235014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:53:51.886607  235014 cri.go:89] found id: "3dc8c2d4b980c014ffa1491dd391550e7f5a7a93ef2c30bc9b3edaf00dc0d2b5"
	I1126 20:53:51.886630  235014 cri.go:89] found id: "17d091197d693185b259153ccacb33eeee4c1ba53fb28487a00287fc279ec0cb"
	I1126 20:53:51.886635  235014 cri.go:89] found id: "4cc796441637eb0023f026a71d7a376933ef2ada9d9fc1eda956dc4f4f216436"
	I1126 20:53:51.886638  235014 cri.go:89] found id: "44bd96cfffedfd12fecaf434158fdab836106a139ff228a697ceeaf1ca7a1314"
	I1126 20:53:51.886641  235014 cri.go:89] found id: "c095077f35df8e2656c94a79f65541cd81179ffafbefae7b7e437bf363947b4c"
	I1126 20:53:51.886645  235014 cri.go:89] found id: "e09245036c46e645d32534534df4df30de7d27d56a2594110164810bc26e056a"
	I1126 20:53:51.886648  235014 cri.go:89] found id: ""
	I1126 20:53:51.886696  235014 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:53:51.916079  235014 retry.go:31] will retry after 282.556387ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:53:51Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:53:52.199529  235014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:53:52.243756  235014 pause.go:52] kubelet running: false
	I1126 20:53:52.243818  235014 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:53:52.574896  235014 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:53:52.574982  235014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:53:52.718732  235014 cri.go:89] found id: "3dc8c2d4b980c014ffa1491dd391550e7f5a7a93ef2c30bc9b3edaf00dc0d2b5"
	I1126 20:53:52.718759  235014 cri.go:89] found id: "17d091197d693185b259153ccacb33eeee4c1ba53fb28487a00287fc279ec0cb"
	I1126 20:53:52.718764  235014 cri.go:89] found id: "4cc796441637eb0023f026a71d7a376933ef2ada9d9fc1eda956dc4f4f216436"
	I1126 20:53:52.718767  235014 cri.go:89] found id: "44bd96cfffedfd12fecaf434158fdab836106a139ff228a697ceeaf1ca7a1314"
	I1126 20:53:52.718770  235014 cri.go:89] found id: "c095077f35df8e2656c94a79f65541cd81179ffafbefae7b7e437bf363947b4c"
	I1126 20:53:52.718773  235014 cri.go:89] found id: "e09245036c46e645d32534534df4df30de7d27d56a2594110164810bc26e056a"
	I1126 20:53:52.718776  235014 cri.go:89] found id: ""
	I1126 20:53:52.718822  235014 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:53:52.739442  235014 retry.go:31] will retry after 481.072011ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:53:52Z" level=error msg="open /run/runc: no such file or directory"
	I1126 20:53:53.220685  235014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:53:53.233864  235014 pause.go:52] kubelet running: false
	I1126 20:53:53.233964  235014 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1126 20:53:53.379385  235014 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1126 20:53:53.379466  235014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1126 20:53:53.460915  235014 cri.go:89] found id: "3dc8c2d4b980c014ffa1491dd391550e7f5a7a93ef2c30bc9b3edaf00dc0d2b5"
	I1126 20:53:53.460936  235014 cri.go:89] found id: "17d091197d693185b259153ccacb33eeee4c1ba53fb28487a00287fc279ec0cb"
	I1126 20:53:53.460941  235014 cri.go:89] found id: "4cc796441637eb0023f026a71d7a376933ef2ada9d9fc1eda956dc4f4f216436"
	I1126 20:53:53.460945  235014 cri.go:89] found id: "44bd96cfffedfd12fecaf434158fdab836106a139ff228a697ceeaf1ca7a1314"
	I1126 20:53:53.460948  235014 cri.go:89] found id: "c095077f35df8e2656c94a79f65541cd81179ffafbefae7b7e437bf363947b4c"
	I1126 20:53:53.460952  235014 cri.go:89] found id: "e09245036c46e645d32534534df4df30de7d27d56a2594110164810bc26e056a"
	I1126 20:53:53.460955  235014 cri.go:89] found id: ""
	I1126 20:53:53.461004  235014 ssh_runner.go:195] Run: sudo runc list -f json
	I1126 20:53:53.491329  235014 out.go:203] 
	W1126 20:53:53.494353  235014 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:53:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T20:53:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1126 20:53:53.494540  235014 out.go:285] * 
	* 
	W1126 20:53:53.506070  235014 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1126 20:53:53.511884  235014 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-583801 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-583801
helpers_test.go:243: (dbg) docker inspect newest-cni-583801:

-- stdout --
	[
	    {
	        "Id": "c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c",
	        "Created": "2025-11-26T20:52:53.985671529Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232561,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:53:32.336682851Z",
	            "FinishedAt": "2025-11-26T20:53:31.462991106Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/hosts",
	        "LogPath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c-json.log",
	        "Name": "/newest-cni-583801",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-583801:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-583801",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c",
	                "LowerDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-583801",
	                "Source": "/var/lib/docker/volumes/newest-cni-583801/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-583801",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-583801",
	                "name.minikube.sigs.k8s.io": "newest-cni-583801",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "170a7eda8bf6e0b36d4a4e371a61ad2b6ca16418ee67f4f54df97cb757c81de8",
	            "SandboxKey": "/var/run/docker/netns/170a7eda8bf6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-583801": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:5e:9c:73:47:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e35a642217b331a1c1ac5d84616493887df16b6946bf83ba7ad44b2d7f7799d7",
	                    "EndpointID": "f6133aeb84521e4dcafb3e8fe54ac34c7650b73f55531ba70ff98743d797646a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-583801",
	                        "c96a716e290f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-583801 -n newest-cni-583801
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-583801 -n newest-cni-583801: exit status 2 (444.842603ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-583801 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-583801 logs -n 25: (1.502760796s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-956694                                                                                                                                                                                                                          │ no-preload-956694            │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ delete  │ -p disable-driver-mounts-180932                                                                                                                                                                                                               │ disable-driver-mounts-180932 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:50 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │                     │
	│ stop    │ -p embed-certs-616586 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-616586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538119 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ image   │ embed-certs-616586 image list --format=json                                                                                                                                                                                                   │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ pause   │ -p embed-certs-616586 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:53 UTC │
	│ delete  │ -p embed-certs-616586                                                                                                                                                                                                                         │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ delete  │ -p embed-certs-616586                                                                                                                                                                                                                         │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ start   │ -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-583801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	│ stop    │ -p newest-cni-583801 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-583801 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ start   │ -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ image   │ default-k8s-diff-port-538119 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ pause   │ -p default-k8s-diff-port-538119 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	│ image   │ newest-cni-583801 image list --format=json                                                                                                                                                                                                    │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ pause   │ -p newest-cni-583801 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-538119                                                                                                                                                                                                               │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:53:32
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:53:32.058363  232430 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:53:32.058691  232430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:32.058726  232430 out.go:374] Setting ErrFile to fd 2...
	I1126 20:53:32.058747  232430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:32.059051  232430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:53:32.059474  232430 out.go:368] Setting JSON to false
	I1126 20:53:32.060471  232430 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5742,"bootTime":1764184670,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:53:32.060576  232430 start.go:143] virtualization:  
	I1126 20:53:32.063909  232430 out.go:179] * [newest-cni-583801] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:53:32.067898  232430 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:53:32.067980  232430 notify.go:221] Checking for updates...
	I1126 20:53:32.074206  232430 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:53:32.077087  232430 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:53:32.080000  232430 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:53:32.083011  232430 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:53:32.086000  232430 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:53:32.089464  232430 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:32.090230  232430 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:53:32.123576  232430 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:53:32.123684  232430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:53:32.181819  232430 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:53:32.171614062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:53:32.181945  232430 docker.go:319] overlay module found
	I1126 20:53:32.185115  232430 out.go:179] * Using the docker driver based on existing profile
	I1126 20:53:32.187998  232430 start.go:309] selected driver: docker
	I1126 20:53:32.188016  232430 start.go:927] validating driver "docker" against &{Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:53:32.188123  232430 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:53:32.188873  232430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:53:32.247743  232430 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:53:32.237861309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:53:32.248097  232430 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:53:32.248130  232430 cni.go:84] Creating CNI manager for ""
	I1126 20:53:32.248192  232430 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:53:32.248235  232430 start.go:353] cluster config:
	{Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:53:32.253258  232430 out.go:179] * Starting "newest-cni-583801" primary control-plane node in "newest-cni-583801" cluster
	I1126 20:53:32.256177  232430 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:53:32.259057  232430 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:53:32.262071  232430 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:53:32.262125  232430 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:53:32.262125  232430 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:53:32.262135  232430 cache.go:65] Caching tarball of preloaded images
	I1126 20:53:32.262351  232430 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:53:32.262363  232430 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:53:32.262584  232430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/config.json ...
	I1126 20:53:32.282185  232430 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:53:32.282208  232430 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:53:32.282228  232430 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:53:32.282258  232430 start.go:360] acquireMachinesLock for newest-cni-583801: {Name:mk5a5c4e74106a93e4d595458226ad93568e2c2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:53:32.282328  232430 start.go:364] duration metric: took 46.324µs to acquireMachinesLock for "newest-cni-583801"
	I1126 20:53:32.282350  232430 start.go:96] Skipping create...Using existing machine configuration
	I1126 20:53:32.282356  232430 fix.go:54] fixHost starting: 
	I1126 20:53:32.282629  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:32.299954  232430 fix.go:112] recreateIfNeeded on newest-cni-583801: state=Stopped err=<nil>
	W1126 20:53:32.299985  232430 fix.go:138] unexpected machine state, will restart: <nil>
	I1126 20:53:32.303287  232430 out.go:252] * Restarting existing docker container for "newest-cni-583801" ...
	I1126 20:53:32.303379  232430 cli_runner.go:164] Run: docker start newest-cni-583801
	I1126 20:53:32.554974  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:32.585873  232430 kic.go:430] container "newest-cni-583801" state is running.
	I1126 20:53:32.586285  232430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:53:32.606597  232430 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/config.json ...
	I1126 20:53:32.606821  232430 machine.go:94] provisionDockerMachine start ...
	I1126 20:53:32.606878  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:32.629439  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:32.630040  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:32.630056  232430 main.go:143] libmachine: About to run SSH command:
	hostname
	I1126 20:53:32.630649  232430 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40338->127.0.0.1:33088: read: connection reset by peer
	I1126 20:53:35.790036  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-583801
	
	I1126 20:53:35.790065  232430 ubuntu.go:182] provisioning hostname "newest-cni-583801"
	I1126 20:53:35.790129  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:35.808513  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:35.808921  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:35.808938  232430 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-583801 && echo "newest-cni-583801" | sudo tee /etc/hostname
	I1126 20:53:35.968272  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-583801
	
	I1126 20:53:35.968372  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:35.985281  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:35.985588  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:35.985608  232430 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-583801' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-583801/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-583801' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1126 20:53:36.134065  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: 
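The SSH command above keeps `/etc/hosts` consistent with the new hostname without duplicating entries. A minimal sketch of that idempotent update, run against a scratch copy instead of the real `/etc/hosts` (the temp file and `old-name` placeholder are illustrative, not from the log):

```shell
# Scratch copy standing in for /etc/hosts
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
NAME=newest-cni-583801

# Only touch the file if the hostname is not already present;
# prefer rewriting an existing 127.0.1.1 line over appending.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running it twice leaves the file unchanged the second time, which is why the provisioner can safely re-run it on every restart.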
	I1126 20:53:36.134089  232430 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21974-2326/.minikube CaCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21974-2326/.minikube}
	I1126 20:53:36.134110  232430 ubuntu.go:190] setting up certificates
	I1126 20:53:36.134120  232430 provision.go:84] configureAuth start
	I1126 20:53:36.134186  232430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:53:36.150568  232430 provision.go:143] copyHostCerts
	I1126 20:53:36.150637  232430 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem, removing ...
	I1126 20:53:36.150656  232430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem
	I1126 20:53:36.150733  232430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/ca.pem (1078 bytes)
	I1126 20:53:36.150850  232430 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem, removing ...
	I1126 20:53:36.150861  232430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem
	I1126 20:53:36.150889  232430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/cert.pem (1123 bytes)
	I1126 20:53:36.150959  232430 exec_runner.go:144] found /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem, removing ...
	I1126 20:53:36.150968  232430 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem
	I1126 20:53:36.150995  232430 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21974-2326/.minikube/key.pem (1675 bytes)
	I1126 20:53:36.151056  232430 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem org=jenkins.newest-cni-583801 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-583801]
	I1126 20:53:36.403502  232430 provision.go:177] copyRemoteCerts
	I1126 20:53:36.403577  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1126 20:53:36.403620  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:36.421644  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:36.529798  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1126 20:53:36.550564  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1126 20:53:36.568751  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1126 20:53:36.587015  232430 provision.go:87] duration metric: took 452.872031ms to configureAuth
	I1126 20:53:36.587084  232430 ubuntu.go:206] setting minikube options for container-runtime
	I1126 20:53:36.587333  232430 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:36.587487  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:36.607919  232430 main.go:143] libmachine: Using SSH client type: native
	I1126 20:53:36.608234  232430 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1126 20:53:36.608248  232430 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1126 20:53:36.958850  232430 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1126 20:53:36.958868  232430 machine.go:97] duration metric: took 4.352038468s to provisionDockerMachine
	I1126 20:53:36.958880  232430 start.go:293] postStartSetup for "newest-cni-583801" (driver="docker")
	I1126 20:53:36.958891  232430 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1126 20:53:36.958970  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1126 20:53:36.959007  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:36.981192  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.093758  232430 ssh_runner.go:195] Run: cat /etc/os-release
	I1126 20:53:37.097114  232430 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1126 20:53:37.097141  232430 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1126 20:53:37.097152  232430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/addons for local assets ...
	I1126 20:53:37.097210  232430 filesync.go:126] Scanning /home/jenkins/minikube-integration/21974-2326/.minikube/files for local assets ...
	I1126 20:53:37.097285  232430 filesync.go:149] local asset: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem -> 41292.pem in /etc/ssl/certs
	I1126 20:53:37.097387  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1126 20:53:37.104454  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:53:37.123849  232430 start.go:296] duration metric: took 164.954962ms for postStartSetup
	I1126 20:53:37.123942  232430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:53:37.123986  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:37.154074  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.255041  232430 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1126 20:53:37.259591  232430 fix.go:56] duration metric: took 4.977229179s for fixHost
	I1126 20:53:37.259622  232430 start.go:83] releasing machines lock for "newest-cni-583801", held for 4.977273748s
	I1126 20:53:37.259685  232430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-583801
	I1126 20:53:37.277023  232430 ssh_runner.go:195] Run: cat /version.json
	I1126 20:53:37.277072  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:37.277368  232430 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1126 20:53:37.277419  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:37.295698  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.296359  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:37.397677  232430 ssh_runner.go:195] Run: systemctl --version
	I1126 20:53:37.512657  232430 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1126 20:53:37.548900  232430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1126 20:53:37.554052  232430 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1126 20:53:37.554156  232430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1126 20:53:37.562330  232430 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
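The `find ... -exec mv` step above disables any bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix so the runtime ignores them. A sketch of the same pattern against a scratch directory (the file names are made up; only the predicate structure mirrors the log):

```shell
# Scratch directory standing in for /etc/cni/net.d
NETD=$(mktemp -d)
touch "$NETD/10-bridge.conflist" "$NETD/87-podman.conflist" "$NETD/200-loopback.conf"

# Rename bridge/podman configs that are not already disabled;
# loopback configs are deliberately left alone (see the log's skip message).
find "$NETD" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$NETD"
```

The `-not -name '*.mk_disabled'` guard makes the rename idempotent across repeated cluster restarts.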
	I1126 20:53:37.562356  232430 start.go:496] detecting cgroup driver to use...
	I1126 20:53:37.562394  232430 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1126 20:53:37.562446  232430 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1126 20:53:37.577492  232430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1126 20:53:37.593094  232430 docker.go:218] disabling cri-docker service (if available) ...
	I1126 20:53:37.593215  232430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1126 20:53:37.611076  232430 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1126 20:53:37.624211  232430 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1126 20:53:37.766892  232430 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1126 20:53:37.889373  232430 docker.go:234] disabling docker service ...
	I1126 20:53:37.889436  232430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1126 20:53:37.904993  232430 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1126 20:53:37.918557  232430 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1126 20:53:38.036692  232430 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1126 20:53:38.159223  232430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1126 20:53:38.173000  232430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1126 20:53:38.190826  232430 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1126 20:53:38.190920  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.200346  232430 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1126 20:53:38.200425  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.209865  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.220265  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.229629  232430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1126 20:53:38.238736  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.249532  232430 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1126 20:53:38.258061  232430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
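The sequence of `sed -i` runs above patches the CRI-O drop-in in place: pause image, cgroup manager, and a `conmon_cgroup` line appended after it. A condensed sketch against a scratch copy of `02-crio.conf` (the starting values are invented to show the rewrites):

```shell
# Scratch stand-in for /etc/crio/crio.conf.d/02-crio.conf
CONF=$(mktemp)
printf 'pause_image = "registry.k8s.io/pause:3.9"\ncgroup_manager = "systemd"\n' > "$CONF"

# Rewrite whole lines so any prior value (or a commented-out default) is replaced.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
# GNU sed "a" appends a new line after the matching one.
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
cat "$CONF"
```

Deleting any old `conmon_cgroup` line first (as the log does) keeps the append from stacking duplicates on re-runs.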
	I1126 20:53:38.267114  232430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1126 20:53:38.274404  232430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1126 20:53:38.281917  232430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:38.418686  232430 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1126 20:53:38.600747  232430 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1126 20:53:38.600819  232430 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1126 20:53:38.604608  232430 start.go:564] Will wait 60s for crictl version
	I1126 20:53:38.604739  232430 ssh_runner.go:195] Run: which crictl
	I1126 20:53:38.608219  232430 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1126 20:53:38.636623  232430 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1126 20:53:38.636773  232430 ssh_runner.go:195] Run: crio --version
	I1126 20:53:38.665852  232430 ssh_runner.go:195] Run: crio --version
	I1126 20:53:38.696090  232430 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1126 20:53:38.698804  232430 cli_runner.go:164] Run: docker network inspect newest-cni-583801 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1126 20:53:38.715485  232430 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1126 20:53:38.719345  232430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:53:38.731937  232430 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1126 20:53:38.734654  232430 kubeadm.go:884] updating cluster {Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1126 20:53:38.734808  232430 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:53:38.734877  232430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:53:38.768850  232430 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:53:38.768875  232430 crio.go:433] Images already preloaded, skipping extraction
	I1126 20:53:38.768939  232430 ssh_runner.go:195] Run: sudo crictl images --output json
	I1126 20:53:38.793625  232430 crio.go:514] all images are preloaded for cri-o runtime.
	I1126 20:53:38.793649  232430 cache_images.go:86] Images are preloaded, skipping loading
	I1126 20:53:38.793658  232430 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1126 20:53:38.793759  232430 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-583801 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1126 20:53:38.793882  232430 ssh_runner.go:195] Run: crio config
	I1126 20:53:38.865073  232430 cni.go:84] Creating CNI manager for ""
	I1126 20:53:38.865138  232430 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:53:38.865169  232430 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1126 20:53:38.865220  232430 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-583801 NodeName:newest-cni-583801 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1126 20:53:38.865412  232430 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-583801"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1126 20:53:38.865499  232430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1126 20:53:38.874386  232430 binaries.go:51] Found k8s binaries, skipping transfer
	I1126 20:53:38.874499  232430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1126 20:53:38.882476  232430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1126 20:53:38.895474  232430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1126 20:53:38.908580  232430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1126 20:53:38.936361  232430 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1126 20:53:38.940405  232430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1126 20:53:38.950570  232430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:39.073766  232430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:53:39.090036  232430 certs.go:69] Setting up /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801 for IP: 192.168.85.2
	I1126 20:53:39.090056  232430 certs.go:195] generating shared ca certs ...
	I1126 20:53:39.090071  232430 certs.go:227] acquiring lock for ca certs: {Name:mk6624f5dc47de70a2a392df95b2ee1f3043c770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.090217  232430 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key
	I1126 20:53:39.090268  232430 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key
	I1126 20:53:39.090280  232430 certs.go:257] generating profile certs ...
	I1126 20:53:39.090371  232430 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/client.key
	I1126 20:53:39.090439  232430 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key.ec6d08a2
	I1126 20:53:39.090482  232430 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.key
	I1126 20:53:39.090624  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem (1338 bytes)
	W1126 20:53:39.090669  232430 certs.go:480] ignoring /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129_empty.pem, impossibly tiny 0 bytes
	I1126 20:53:39.090687  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca-key.pem (1675 bytes)
	I1126 20:53:39.090717  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/ca.pem (1078 bytes)
	I1126 20:53:39.090746  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/cert.pem (1123 bytes)
	I1126 20:53:39.090782  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/certs/key.pem (1675 bytes)
	I1126 20:53:39.090834  232430 certs.go:484] found cert: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem (1708 bytes)
	I1126 20:53:39.091409  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1126 20:53:39.111085  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1126 20:53:39.131081  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1126 20:53:39.150675  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1126 20:53:39.175703  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1126 20:53:39.193732  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1126 20:53:39.212217  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1126 20:53:39.230737  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/newest-cni-583801/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1126 20:53:39.255424  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1126 20:53:39.283103  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/certs/4129.pem --> /usr/share/ca-certificates/4129.pem (1338 bytes)
	I1126 20:53:39.303012  232430 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/ssl/certs/41292.pem --> /usr/share/ca-certificates/41292.pem (1708 bytes)
	I1126 20:53:39.330502  232430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1126 20:53:39.344168  232430 ssh_runner.go:195] Run: openssl version
	I1126 20:53:39.352490  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1126 20:53:39.362130  232430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:39.365900  232430 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 26 19:37 /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:39.365990  232430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1126 20:53:39.407962  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1126 20:53:39.415762  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4129.pem && ln -fs /usr/share/ca-certificates/4129.pem /etc/ssl/certs/4129.pem"
	I1126 20:53:39.423668  232430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4129.pem
	I1126 20:53:39.428006  232430 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 26 19:43 /usr/share/ca-certificates/4129.pem
	I1126 20:53:39.428120  232430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4129.pem
	I1126 20:53:39.472113  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4129.pem /etc/ssl/certs/51391683.0"
	I1126 20:53:39.480458  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41292.pem && ln -fs /usr/share/ca-certificates/41292.pem /etc/ssl/certs/41292.pem"
	I1126 20:53:39.488646  232430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41292.pem
	I1126 20:53:39.492457  232430 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 26 19:43 /usr/share/ca-certificates/41292.pem
	I1126 20:53:39.492531  232430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41292.pem
	I1126 20:53:39.534127  232430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41292.pem /etc/ssl/certs/3ec20f2e.0"
	I1126 20:53:39.542196  232430 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1126 20:53:39.545815  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1126 20:53:39.589091  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1126 20:53:39.632705  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1126 20:53:39.674810  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1126 20:53:39.720209  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1126 20:53:39.768920  232430 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
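	[editor's note] The run of `openssl x509 ... -checkend 86400` probes above checks that each control-plane certificate remains valid for at least another 86400 seconds (24 hours); exit status 0 means "will not expire within that window". A self-contained sketch of the same check — the temp directory and cert are illustrative stand-ins, not the minikube files:

```shell
# Generate a short-lived self-signed cert, then ask openssl whether it will
# still be valid 86400s (24h) from now. -checkend exits 0 iff it will not
# have expired by then.
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -days 2 -keyout "$tmp/key.pem" -out "$tmp/cert.pem" >/dev/null 2>&1
checkend_ok=no
if openssl x509 -noout -in "$tmp/cert.pem" -checkend 86400; then
  checkend_ok=yes
fi
rm -rf "$tmp"
echo "checkend_ok=$checkend_ok"
```

A cert issued for 2 days passes a 24h `-checkend`; rerunning with `-checkend 259200` (3 days) would make the same cert fail the probe.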
	I1126 20:53:39.821001  232430 kubeadm.go:401] StartCluster: {Name:newest-cni-583801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-583801 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 20:53:39.821148  232430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1126 20:53:39.821224  232430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1126 20:53:39.882517  232430 cri.go:89] found id: ""
	I1126 20:53:39.882634  232430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1126 20:53:39.891828  232430 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1126 20:53:39.891896  232430 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1126 20:53:39.891974  232430 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1126 20:53:39.908470  232430 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1126 20:53:39.909115  232430 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-583801" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:53:39.909412  232430 kubeconfig.go:62] /home/jenkins/minikube-integration/21974-2326/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-583801" cluster setting kubeconfig missing "newest-cni-583801" context setting]
	I1126 20:53:39.909916  232430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.911663  232430 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1126 20:53:39.927837  232430 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1126 20:53:39.927910  232430 kubeadm.go:602] duration metric: took 35.995984ms to restartPrimaryControlPlane
	I1126 20:53:39.927935  232430 kubeadm.go:403] duration metric: took 106.942909ms to StartCluster
	I1126 20:53:39.927978  232430 settings.go:142] acquiring lock: {Name:mkfa9769dd6cb90f9e6ab4e649174affc8c211c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.928065  232430 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:53:39.929037  232430 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/kubeconfig: {Name:mk31d3c3cd766bb0755a8ea89aea97c29670aa46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:39.929289  232430 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:53:39.929688  232430 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:39.929683  232430 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1126 20:53:39.929835  232430 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-583801"
	I1126 20:53:39.929857  232430 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-583801"
	W1126 20:53:39.929864  232430 addons.go:248] addon storage-provisioner should already be in state true
	I1126 20:53:39.929886  232430 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:39.930468  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:39.931431  232430 addons.go:70] Setting dashboard=true in profile "newest-cni-583801"
	I1126 20:53:39.931456  232430 addons.go:239] Setting addon dashboard=true in "newest-cni-583801"
	W1126 20:53:39.931463  232430 addons.go:248] addon dashboard should already be in state true
	I1126 20:53:39.931487  232430 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:39.931941  232430 addons.go:70] Setting default-storageclass=true in profile "newest-cni-583801"
	I1126 20:53:39.932129  232430 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-583801"
	I1126 20:53:39.931948  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:39.935682  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:39.936069  232430 out.go:179] * Verifying Kubernetes components...
	I1126 20:53:39.946487  232430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1126 20:53:39.986467  232430 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1126 20:53:39.992014  232430 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:53:39.992038  232430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1126 20:53:39.992103  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:39.997474  232430 addons.go:239] Setting addon default-storageclass=true in "newest-cni-583801"
	W1126 20:53:39.997502  232430 addons.go:248] addon default-storageclass should already be in state true
	I1126 20:53:39.997527  232430 host.go:66] Checking if "newest-cni-583801" exists ...
	I1126 20:53:39.999962  232430 cli_runner.go:164] Run: docker container inspect newest-cni-583801 --format={{.State.Status}}
	I1126 20:53:40.036086  232430 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1126 20:53:40.042265  232430 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1126 20:53:40.048152  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1126 20:53:40.048197  232430 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1126 20:53:40.048290  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:40.050104  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:40.063728  232430 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1126 20:53:40.063750  232430 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1126 20:53:40.063811  232430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-583801
	I1126 20:53:40.094270  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:40.107034  232430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/newest-cni-583801/id_rsa Username:docker}
	I1126 20:53:40.301165  232430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1126 20:53:40.310971  232430 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1126 20:53:40.367032  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1126 20:53:40.367104  232430 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1126 20:53:40.417315  232430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1126 20:53:40.431253  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1126 20:53:40.431315  232430 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1126 20:53:40.499903  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1126 20:53:40.499968  232430 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1126 20:53:40.572044  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1126 20:53:40.572108  232430 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1126 20:53:40.627747  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1126 20:53:40.627818  232430 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1126 20:53:40.670335  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1126 20:53:40.670406  232430 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1126 20:53:40.693837  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1126 20:53:40.693908  232430 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1126 20:53:40.718979  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1126 20:53:40.719049  232430 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1126 20:53:40.745578  232430 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:53:40.745659  232430 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1126 20:53:40.766507  232430 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1126 20:53:49.061232  232430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.760022983s)
	I1126 20:53:49.061319  232430 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.750328431s)
	I1126 20:53:49.061346  232430 api_server.go:52] waiting for apiserver process to appear ...
	I1126 20:53:49.061404  232430 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:53:49.061468  232430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.64408306s)
	I1126 20:53:49.282327  232430 api_server.go:72] duration metric: took 9.352983796s to wait for apiserver process to appear ...
	I1126 20:53:49.282351  232430 api_server.go:88] waiting for apiserver healthz status ...
	I1126 20:53:49.282368  232430 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:53:49.283534  232430 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.516938484s)
	I1126 20:53:49.286516  232430 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-583801 addons enable metrics-server
	
	I1126 20:53:49.290067  232430 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1126 20:53:49.292953  232430 addons.go:530] duration metric: took 9.363276181s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1126 20:53:49.305793  232430 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1126 20:53:49.305819  232430 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
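	[editor's note] The verbose healthz body above marks each post-start hook with `[+]` (ok) or `[-]` (failed); the single `[-]` on `poststarthook/rbac/bootstrap-roles` is what turns the response into a 500 until the RBAC bootstrap completes. A minimal sketch that counts failures in such a body (sample abridged from the log, not fetched live):

```shell
# Count failed ([-]) checks in a saved verbose /healthz response body.
body='[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
healthz check failed'
failed=$(printf '%s\n' "$body" | grep -c '^\[-\]')
echo "failed=$failed"
```

With a live apiserver the same body comes from `GET /healthz?verbose` (authenticated); zero `[-]` lines corresponds to the plain `ok` 200 seen on the retry below.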
	I1126 20:53:49.783274  232430 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1126 20:53:49.795311  232430 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1126 20:53:49.797765  232430 api_server.go:141] control plane version: v1.34.1
	I1126 20:53:49.797791  232430 api_server.go:131] duration metric: took 515.433715ms to wait for apiserver health ...
	I1126 20:53:49.797800  232430 system_pods.go:43] waiting for kube-system pods to appear ...
	I1126 20:53:49.804404  232430 system_pods.go:59] 8 kube-system pods found
	I1126 20:53:49.804439  232430 system_pods.go:61] "coredns-66bc5c9577-jgvmh" [120d7cde-44e6-4b70-a084-5dc9aedb43a1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:53:49.804447  232430 system_pods.go:61] "etcd-newest-cni-583801" [008f5999-344a-4440-9a40-e1cbef7e635a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1126 20:53:49.804453  232430 system_pods.go:61] "kindnet-sbsft" [86669a04-b137-4030-a081-e29138539712] Running
	I1126 20:53:49.804461  232430 system_pods.go:61] "kube-apiserver-newest-cni-583801" [4a7b65d1-3d49-4c9c-b7e2-c7710ef418b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1126 20:53:49.804467  232430 system_pods.go:61] "kube-controller-manager-newest-cni-583801" [9e395a3d-9368-41db-8671-6d9e20ec9c53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1126 20:53:49.804471  232430 system_pods.go:61] "kube-proxy-gjz2x" [b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8] Running
	I1126 20:53:49.804477  232430 system_pods.go:61] "kube-scheduler-newest-cni-583801" [ddeb5080-621e-4014-815b-06844437b467] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1126 20:53:49.804481  232430 system_pods.go:61] "storage-provisioner" [99891d85-c274-44a1-b73d-7c21c77d320c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1126 20:53:49.804487  232430 system_pods.go:74] duration metric: took 6.681961ms to wait for pod list to return data ...
	I1126 20:53:49.804495  232430 default_sa.go:34] waiting for default service account to be created ...
	I1126 20:53:49.813031  232430 default_sa.go:45] found service account: "default"
	I1126 20:53:49.813055  232430 default_sa.go:55] duration metric: took 8.550929ms for default service account to be created ...
	I1126 20:53:49.813068  232430 kubeadm.go:587] duration metric: took 9.883730045s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1126 20:53:49.813087  232430 node_conditions.go:102] verifying NodePressure condition ...
	I1126 20:53:49.820165  232430 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1126 20:53:49.820257  232430 node_conditions.go:123] node cpu capacity is 2
	I1126 20:53:49.820284  232430 node_conditions.go:105] duration metric: took 7.19128ms to run NodePressure ...
	I1126 20:53:49.820326  232430 start.go:242] waiting for startup goroutines ...
	I1126 20:53:49.820351  232430 start.go:247] waiting for cluster config update ...
	I1126 20:53:49.820378  232430 start.go:256] writing updated cluster config ...
	I1126 20:53:49.820677  232430 ssh_runner.go:195] Run: rm -f paused
	I1126 20:53:49.938067  232430 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1126 20:53:49.941393  232430 out.go:179] * Done! kubectl is now configured to use "newest-cni-583801" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.311905942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.334408312Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=635a1a52-9872-4dd2-a92c-3aefea77d8a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.341222043Z" level=info msg="Ran pod sandbox 283b2bf9e4174765197d8a0ad89d23952362a95eaf424830ce983dbfda8dfeac with infra container: kube-system/kindnet-sbsft/POD" id=635a1a52-9872-4dd2-a92c-3aefea77d8a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.358260387Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=24523fca-edbe-4c36-9418-9abfefd12ec6 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.373267339Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9288cbf3-f319-4641-a8bd-68db59ff66e7 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.378383042Z" level=info msg="Creating container: kube-system/kindnet-sbsft/kindnet-cni" id=6ddcb9fa-6039-4fda-8830-5fda3dd8a2d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.378494842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.392018619Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.392810252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.440255136Z" level=info msg="Created container 17d091197d693185b259153ccacb33eeee4c1ba53fb28487a00287fc279ec0cb: kube-system/kindnet-sbsft/kindnet-cni" id=6ddcb9fa-6039-4fda-8830-5fda3dd8a2d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.446190426Z" level=info msg="Starting container: 17d091197d693185b259153ccacb33eeee4c1ba53fb28487a00287fc279ec0cb" id=eb34230d-f38b-48df-8789-f64e5c4fe170 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.494894876Z" level=info msg="Started container" PID=1059 containerID=17d091197d693185b259153ccacb33eeee4c1ba53fb28487a00287fc279ec0cb description=kube-system/kindnet-sbsft/kindnet-cni id=eb34230d-f38b-48df-8789-f64e5c4fe170 name=/runtime.v1.RuntimeService/StartContainer sandboxID=283b2bf9e4174765197d8a0ad89d23952362a95eaf424830ce983dbfda8dfeac
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.913529675Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-gjz2x/POD" id=47240304-4056-4073-aa8e-abfc34ce1791 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.913595897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.927824474Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=47240304-4056-4073-aa8e-abfc34ce1791 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.941080714Z" level=info msg="Ran pod sandbox 2994862ffb28a810df7872d6a7bc67b31ad94df0247be70586e97c994773ac70 with infra container: kube-system/kube-proxy-gjz2x/POD" id=47240304-4056-4073-aa8e-abfc34ce1791 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.95246106Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=bb7af711-7cac-41ff-8e1f-0a35572ae3b8 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.954574797Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4ed916b8-fdb3-4add-80b4-a3ed5667244b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.966922419Z" level=info msg="Creating container: kube-system/kube-proxy-gjz2x/kube-proxy" id=818d5389-5f12-4272-add3-8b37966c2290 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.967260051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:48 newest-cni-583801 crio[615]: time="2025-11-26T20:53:48.022685049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:48 newest-cni-583801 crio[615]: time="2025-11-26T20:53:48.023641789Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:48 newest-cni-583801 crio[615]: time="2025-11-26T20:53:48.294575403Z" level=info msg="Created container 3dc8c2d4b980c014ffa1491dd391550e7f5a7a93ef2c30bc9b3edaf00dc0d2b5: kube-system/kube-proxy-gjz2x/kube-proxy" id=818d5389-5f12-4272-add3-8b37966c2290 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:48 newest-cni-583801 crio[615]: time="2025-11-26T20:53:48.306559326Z" level=info msg="Starting container: 3dc8c2d4b980c014ffa1491dd391550e7f5a7a93ef2c30bc9b3edaf00dc0d2b5" id=7a116f12-cc4a-449a-9536-5b397c9276e7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:53:48 newest-cni-583801 crio[615]: time="2025-11-26T20:53:48.309298653Z" level=info msg="Started container" PID=1101 containerID=3dc8c2d4b980c014ffa1491dd391550e7f5a7a93ef2c30bc9b3edaf00dc0d2b5 description=kube-system/kube-proxy-gjz2x/kube-proxy id=7a116f12-cc4a-449a-9536-5b397c9276e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2994862ffb28a810df7872d6a7bc67b31ad94df0247be70586e97c994773ac70
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3dc8c2d4b980c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   2994862ffb28a       kube-proxy-gjz2x                            kube-system
	17d091197d693       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   283b2bf9e4174       kindnet-sbsft                               kube-system
	4cc796441637e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   490af855a20a3       kube-controller-manager-newest-cni-583801   kube-system
	44bd96cfffedf       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   fdbde481e6db0       kube-scheduler-newest-cni-583801            kube-system
	c095077f35df8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      1                   32ecbf22e8a84       etcd-newest-cni-583801                      kube-system
	e09245036c46e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            1                   944aed9604178       kube-apiserver-newest-cni-583801            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-583801
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-583801
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=newest-cni-583801
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_53_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:53:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-583801
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:53:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:53:47 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:53:47 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:53:47 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 26 Nov 2025 20:53:47 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-583801
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                e506ba8d-2f72-4740-8ae9-08bb604d173a
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-583801                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-sbsft                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-583801             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-583801    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-gjz2x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-583801             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node newest-cni-583801 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 42s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 42s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node newest-cni-583801 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node newest-cni-583801 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-583801 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-583801 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-583801 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-583801 event: Registered Node newest-cni-583801 in Controller
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-583801 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-583801 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-583801 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-583801 event: Registered Node newest-cni-583801 in Controller
	
	
	==> dmesg <==
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	[Nov26 20:49] overlayfs: idmapped layers are currently not supported
	[Nov26 20:50] overlayfs: idmapped layers are currently not supported
	[Nov26 20:51] overlayfs: idmapped layers are currently not supported
	[ +24.066506] overlayfs: idmapped layers are currently not supported
	[Nov26 20:52] overlayfs: idmapped layers are currently not supported
	[Nov26 20:53] overlayfs: idmapped layers are currently not supported
	[ +25.622621] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c095077f35df8e2656c94a79f65541cd81179ffafbefae7b7e437bf363947b4c] <==
	{"level":"warn","ts":"2025-11-26T20:53:43.793175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.829088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.844151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.869890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.897115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.922411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.966946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.996294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.032064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.101946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.103394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.124916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.138213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.163306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.181719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.205538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.252881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.281453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.318410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.374102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.397546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.437083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.460373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.589251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40892","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T20:53:48.190592Z","caller":"traceutil/trace.go:172","msg":"trace[1222665399] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"101.340778ms","start":"2025-11-26T20:53:48.089233Z","end":"2025-11-26T20:53:48.190574Z","steps":["trace[1222665399] 'process raft request'  (duration: 100.633672ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:53:55 up  1:36,  0 user,  load average: 4.40, 3.56, 2.78
	Linux newest-cni-583801 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [17d091197d693185b259153ccacb33eeee4c1ba53fb28487a00287fc279ec0cb] <==
	I1126 20:53:47.725873       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:53:47.730766       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:53:47.730864       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:53:47.730877       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:53:47.730891       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:53:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:53:47.967963       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:53:47.967988       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:53:47.967997       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:53:47.968280       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [e09245036c46e645d32534534df4df30de7d27d56a2594110164810bc26e056a] <==
	I1126 20:53:46.929151       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:53:46.929768       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:53:46.929781       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:53:46.929790       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:53:46.929797       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:53:46.930078       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:53:46.984240       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:53:47.020471       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:53:47.119788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:53:48.200679       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:53:48.422501       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:53:48.707991       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:53:48.811125       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:53:49.230243       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.166.159"}
	I1126 20:53:49.267641       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.213.249"}
	E1126 20:53:51.645428       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1126 20:53:51.645606       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-11-26T20:53:51.651154Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001ae45a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1126 20:53:51.654447       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.576857ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1126 20:53:51.654681       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1126 20:53:51.668599       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="23.300034ms" method="PATCH" path="/api/v1/namespaces/kube-system/pods/etcd-newest-cni-583801/status" result=null
	I1126 20:53:53.714108       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:53:53.833638       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:53:54.187158       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:53:54.231566       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [4cc796441637eb0023f026a71d7a376933ef2ada9d9fc1eda956dc4f4f216436] <==
	I1126 20:53:53.679519       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-583801"
	I1126 20:53:53.679678       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1126 20:53:53.680002       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:53:53.680048       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:53:53.680162       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:53:53.680168       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:53:53.680173       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:53:53.689353       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:53:53.689479       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 20:53:53.689974       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:53:53.690011       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 20:53:53.690414       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:53:53.692337       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:53:53.693248       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:53:53.700455       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:53:53.701081       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:53:53.704602       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 20:53:53.710929       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:53:53.713020       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 20:53:53.715083       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:53:53.715384       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1126 20:53:53.722691       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:53:53.722734       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:53:53.729829       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:53:53.734702       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [3dc8c2d4b980c014ffa1491dd391550e7f5a7a93ef2c30bc9b3edaf00dc0d2b5] <==
	I1126 20:53:49.458107       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:53:50.559524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:53:50.664431       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:53:50.672195       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1126 20:53:50.672308       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:53:52.429078       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:53:52.429183       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:53:52.506172       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:53:52.506672       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:53:52.506859       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:53:52.508144       1 config.go:200] "Starting service config controller"
	I1126 20:53:52.508199       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:53:52.508240       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:53:52.520259       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:53:52.520360       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:53:52.520389       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:53:52.521075       1 config.go:309] "Starting node config controller"
	I1126 20:53:52.521463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:53:52.521498       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:53:52.634926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:53:52.634962       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:53:52.635000       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [44bd96cfffedfd12fecaf434158fdab836106a139ff228a697ceeaf1ca7a1314] <==
	I1126 20:53:45.142014       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:53:52.445601       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:53:52.445629       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:53:52.454960       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:53:52.455082       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1126 20:53:52.456103       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1126 20:53:52.456207       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:53:52.471257       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:53:52.471283       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:53:52.471440       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:53:52.471448       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:53:52.657465       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1126 20:53:52.672365       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:53:52.672444       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:53:42 newest-cni-583801 kubelet[736]: E1126 20:53:42.361438     736 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-583801\" not found" node="newest-cni-583801"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.058744     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-583801"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.234057     736 apiserver.go:52] "Watching apiserver"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.748824     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.834165     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86669a04-b137-4030-a081-e29138539712-xtables-lock\") pod \"kindnet-sbsft\" (UID: \"86669a04-b137-4030-a081-e29138539712\") " pod="kube-system/kindnet-sbsft"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.834226     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86669a04-b137-4030-a081-e29138539712-lib-modules\") pod \"kindnet-sbsft\" (UID: \"86669a04-b137-4030-a081-e29138539712\") " pod="kube-system/kindnet-sbsft"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.834264     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8-xtables-lock\") pod \"kube-proxy-gjz2x\" (UID: \"b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8\") " pod="kube-system/kube-proxy-gjz2x"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.834289     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/86669a04-b137-4030-a081-e29138539712-cni-cfg\") pod \"kindnet-sbsft\" (UID: \"86669a04-b137-4030-a081-e29138539712\") " pod="kube-system/kindnet-sbsft"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.834305     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8-lib-modules\") pod \"kube-proxy-gjz2x\" (UID: \"b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8\") " pod="kube-system/kube-proxy-gjz2x"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: E1126 20:53:46.861711     736 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-583801\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-583801' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.119255     736 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.119408     736 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.119442     736 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.120174     736 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.122330     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: E1126 20:53:47.151549     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-583801\" already exists" pod="kube-system/etcd-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.151584     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: E1126 20:53:47.221007     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-583801\" already exists" pod="kube-system/kube-apiserver-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.221041     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: E1126 20:53:47.309751     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-583801\" already exists" pod="kube-system/kube-controller-manager-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.309790     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: E1126 20:53:47.381124     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-583801\" already exists" pod="kube-system/kube-scheduler-newest-cni-583801"
	Nov 26 20:53:51 newest-cni-583801 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:53:51 newest-cni-583801 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:53:51 newest-cni-583801 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-583801 -n newest-cni-583801
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-583801 -n newest-cni-583801: exit status 2 (438.687132ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-583801 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-jgvmh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-ghd2s kubernetes-dashboard-855c9754f9-54nm8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-583801 describe pod coredns-66bc5c9577-jgvmh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-ghd2s kubernetes-dashboard-855c9754f9-54nm8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-583801 describe pod coredns-66bc5c9577-jgvmh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-ghd2s kubernetes-dashboard-855c9754f9-54nm8: exit status 1 (113.868439ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-jgvmh" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-ghd2s" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-54nm8" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-583801 describe pod coredns-66bc5c9577-jgvmh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-ghd2s kubernetes-dashboard-855c9754f9-54nm8: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-583801
helpers_test.go:243: (dbg) docker inspect newest-cni-583801:

-- stdout --
	[
	    {
	        "Id": "c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c",
	        "Created": "2025-11-26T20:52:53.985671529Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232561,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-26T20:53:32.336682851Z",
	            "FinishedAt": "2025-11-26T20:53:31.462991106Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/hosts",
	        "LogPath": "/var/lib/docker/containers/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c/c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c-json.log",
	        "Name": "/newest-cni-583801",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-583801:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-583801",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c96a716e290f62da955b97883ff3f23f40748baca13d00c4462c5517ccd5e09c",
	                "LowerDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186-init/diff:/var/lib/docker/overlay2/3a1bb6e86b241e6f18c70382297fe77231df431eb3db13a25905602860359c70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f23a4729fa6ded3a1a8ccc66cde534e546b45b2bd8d04f55047b513a2d3a9186/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-583801",
	                "Source": "/var/lib/docker/volumes/newest-cni-583801/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-583801",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-583801",
	                "name.minikube.sigs.k8s.io": "newest-cni-583801",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "170a7eda8bf6e0b36d4a4e371a61ad2b6ca16418ee67f4f54df97cb757c81de8",
	            "SandboxKey": "/var/run/docker/netns/170a7eda8bf6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-583801": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:5e:9c:73:47:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e35a642217b331a1c1ac5d84616493887df16b6946bf83ba7ad44b2d7f7799d7",
	                    "EndpointID": "f6133aeb84521e4dcafb3e8fe54ac34c7650b73f55531ba70ff98743d797646a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-583801",
	                        "c96a716e290f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-583801 -n newest-cni-583801
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-583801 -n newest-cni-583801: exit status 2 (459.320933ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-583801 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-583801 logs -n 25: (1.272758895s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:50 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-616586 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │                     │
	│ stop    │ -p embed-certs-616586 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ addons  │ enable dashboard -p embed-certs-616586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:51 UTC │
	│ start   │ -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:51 UTC │ 26 Nov 25 20:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-538119 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-538119 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ image   │ embed-certs-616586 image list --format=json                                                                                                                                                                                                   │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ pause   │ -p embed-certs-616586 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-538119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ start   │ -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:53 UTC │
	│ delete  │ -p embed-certs-616586                                                                                                                                                                                                                         │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ delete  │ -p embed-certs-616586                                                                                                                                                                                                                         │ embed-certs-616586           │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:52 UTC │
	│ start   │ -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:52 UTC │ 26 Nov 25 20:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-583801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	│ stop    │ -p newest-cni-583801 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-583801 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ start   │ -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ image   │ default-k8s-diff-port-538119 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ pause   │ -p default-k8s-diff-port-538119 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	│ image   │ newest-cni-583801 image list --format=json                                                                                                                                                                                                    │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ pause   │ -p newest-cni-583801 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-583801            │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-538119                                                                                                                                                                                                               │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ delete  │ -p default-k8s-diff-port-538119                                                                                                                                                                                                               │ default-k8s-diff-port-538119 │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │ 26 Nov 25 20:53 UTC │
	│ start   │ -p auto-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-235709                  │ jenkins │ v1.37.0 │ 26 Nov 25 20:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 20:53:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 20:53:55.791398  236131 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:53:55.791549  236131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:55.791561  236131 out.go:374] Setting ErrFile to fd 2...
	I1126 20:53:55.791568  236131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:53:55.791820  236131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:53:55.792202  236131 out.go:368] Setting JSON to false
	I1126 20:53:55.793089  236131 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5766,"bootTime":1764184670,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:53:55.793156  236131 start.go:143] virtualization:  
	I1126 20:53:55.796578  236131 out.go:179] * [auto-235709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:53:55.799524  236131 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:53:55.799661  236131 notify.go:221] Checking for updates...
	I1126 20:53:55.805331  236131 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:53:55.808213  236131 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:53:55.811218  236131 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:53:55.814073  236131 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:53:55.817058  236131 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:53:55.821325  236131 config.go:182] Loaded profile config "newest-cni-583801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:53:55.821429  236131 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:53:55.873469  236131 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:53:55.873598  236131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:53:55.954653  236131 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:53:55.941395475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:53:55.954755  236131 docker.go:319] overlay module found
	I1126 20:53:55.957984  236131 out.go:179] * Using the docker driver based on user configuration
	I1126 20:53:55.960960  236131 start.go:309] selected driver: docker
	I1126 20:53:55.960986  236131 start.go:927] validating driver "docker" against <nil>
	I1126 20:53:55.961001  236131 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:53:55.961671  236131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:53:56.055635  236131 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 20:53:56.042759955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:53:56.055785  236131 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 20:53:56.056004  236131 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1126 20:53:56.058887  236131 out.go:179] * Using Docker driver with root privileges
	I1126 20:53:56.061683  236131 cni.go:84] Creating CNI manager for ""
	I1126 20:53:56.061752  236131 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 20:53:56.061765  236131 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 20:53:56.061856  236131 start.go:353] cluster config:
	{Name:auto-235709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-235709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1126 20:53:56.064992  236131 out.go:179] * Starting "auto-235709" primary control-plane node in "auto-235709" cluster
	I1126 20:53:56.067886  236131 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 20:53:56.070824  236131 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1126 20:53:56.073595  236131 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 20:53:56.073648  236131 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 20:53:56.073664  236131 cache.go:65] Caching tarball of preloaded images
	I1126 20:53:56.073769  236131 preload.go:238] Found /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1126 20:53:56.073787  236131 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1126 20:53:56.073900  236131 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/config.json ...
	I1126 20:53:56.074007  236131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/config.json: {Name:mk697d4faf37fa02700f5fcf39163cda8908111d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1126 20:53:56.074182  236131 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 20:53:56.106347  236131 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1126 20:53:56.106372  236131 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1126 20:53:56.106388  236131 cache.go:243] Successfully downloaded all kic artifacts
	I1126 20:53:56.106472  236131 start.go:360] acquireMachinesLock for auto-235709: {Name:mk7b5428e3c56eceebfc3762484765947a869a84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1126 20:53:56.106611  236131 start.go:364] duration metric: took 113.399µs to acquireMachinesLock for "auto-235709"
	I1126 20:53:56.106642  236131 start.go:93] Provisioning new machine with config: &{Name:auto-235709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-235709 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1126 20:53:56.106715  236131 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.311905942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.334408312Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=635a1a52-9872-4dd2-a92c-3aefea77d8a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.341222043Z" level=info msg="Ran pod sandbox 283b2bf9e4174765197d8a0ad89d23952362a95eaf424830ce983dbfda8dfeac with infra container: kube-system/kindnet-sbsft/POD" id=635a1a52-9872-4dd2-a92c-3aefea77d8a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.358260387Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=24523fca-edbe-4c36-9418-9abfefd12ec6 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.373267339Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9288cbf3-f319-4641-a8bd-68db59ff66e7 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.378383042Z" level=info msg="Creating container: kube-system/kindnet-sbsft/kindnet-cni" id=6ddcb9fa-6039-4fda-8830-5fda3dd8a2d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.378494842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.392018619Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.392810252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.440255136Z" level=info msg="Created container 17d091197d693185b259153ccacb33eeee4c1ba53fb28487a00287fc279ec0cb: kube-system/kindnet-sbsft/kindnet-cni" id=6ddcb9fa-6039-4fda-8830-5fda3dd8a2d3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.446190426Z" level=info msg="Starting container: 17d091197d693185b259153ccacb33eeee4c1ba53fb28487a00287fc279ec0cb" id=eb34230d-f38b-48df-8789-f64e5c4fe170 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.494894876Z" level=info msg="Started container" PID=1059 containerID=17d091197d693185b259153ccacb33eeee4c1ba53fb28487a00287fc279ec0cb description=kube-system/kindnet-sbsft/kindnet-cni id=eb34230d-f38b-48df-8789-f64e5c4fe170 name=/runtime.v1.RuntimeService/StartContainer sandboxID=283b2bf9e4174765197d8a0ad89d23952362a95eaf424830ce983dbfda8dfeac
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.913529675Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-gjz2x/POD" id=47240304-4056-4073-aa8e-abfc34ce1791 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.913595897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.927824474Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=47240304-4056-4073-aa8e-abfc34ce1791 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.941080714Z" level=info msg="Ran pod sandbox 2994862ffb28a810df7872d6a7bc67b31ad94df0247be70586e97c994773ac70 with infra container: kube-system/kube-proxy-gjz2x/POD" id=47240304-4056-4073-aa8e-abfc34ce1791 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.95246106Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=bb7af711-7cac-41ff-8e1f-0a35572ae3b8 name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.954574797Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4ed916b8-fdb3-4add-80b4-a3ed5667244b name=/runtime.v1.ImageService/ImageStatus
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.966922419Z" level=info msg="Creating container: kube-system/kube-proxy-gjz2x/kube-proxy" id=818d5389-5f12-4272-add3-8b37966c2290 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:47 newest-cni-583801 crio[615]: time="2025-11-26T20:53:47.967260051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:48 newest-cni-583801 crio[615]: time="2025-11-26T20:53:48.022685049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:48 newest-cni-583801 crio[615]: time="2025-11-26T20:53:48.023641789Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 26 20:53:48 newest-cni-583801 crio[615]: time="2025-11-26T20:53:48.294575403Z" level=info msg="Created container 3dc8c2d4b980c014ffa1491dd391550e7f5a7a93ef2c30bc9b3edaf00dc0d2b5: kube-system/kube-proxy-gjz2x/kube-proxy" id=818d5389-5f12-4272-add3-8b37966c2290 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 26 20:53:48 newest-cni-583801 crio[615]: time="2025-11-26T20:53:48.306559326Z" level=info msg="Starting container: 3dc8c2d4b980c014ffa1491dd391550e7f5a7a93ef2c30bc9b3edaf00dc0d2b5" id=7a116f12-cc4a-449a-9536-5b397c9276e7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 26 20:53:48 newest-cni-583801 crio[615]: time="2025-11-26T20:53:48.309298653Z" level=info msg="Started container" PID=1101 containerID=3dc8c2d4b980c014ffa1491dd391550e7f5a7a93ef2c30bc9b3edaf00dc0d2b5 description=kube-system/kube-proxy-gjz2x/kube-proxy id=7a116f12-cc4a-449a-9536-5b397c9276e7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2994862ffb28a810df7872d6a7bc67b31ad94df0247be70586e97c994773ac70
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3dc8c2d4b980c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   9 seconds ago       Running             kube-proxy                1                   2994862ffb28a       kube-proxy-gjz2x                            kube-system
	17d091197d693       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   10 seconds ago      Running             kindnet-cni               1                   283b2bf9e4174       kindnet-sbsft                               kube-system
	4cc796441637e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   17 seconds ago      Running             kube-controller-manager   1                   490af855a20a3       kube-controller-manager-newest-cni-583801   kube-system
	44bd96cfffedf       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   17 seconds ago      Running             kube-scheduler            1                   fdbde481e6db0       kube-scheduler-newest-cni-583801            kube-system
	c095077f35df8       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   17 seconds ago      Running             etcd                      1                   32ecbf22e8a84       etcd-newest-cni-583801                      kube-system
	e09245036c46e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   17 seconds ago      Running             kube-apiserver            1                   944aed9604178       kube-apiserver-newest-cni-583801            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-583801
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-583801
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1f9f533a52cbc43a7fc74d1e77b7e9da93c5d970
	                    minikube.k8s.io/name=newest-cni-583801
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_26T20_53_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Nov 2025 20:53:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-583801
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Nov 2025 20:53:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Nov 2025 20:53:47 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Nov 2025 20:53:47 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Nov 2025 20:53:47 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 26 Nov 2025 20:53:47 +0000   Wed, 26 Nov 2025 20:53:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-583801
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd56ca808394105f594af1d1692718f7
	  System UUID:                e506ba8d-2f72-4740-8ae9-08bb604d173a
	  Boot ID:                    486ac1e4-7398-4de0-aac9-858aafe3bfc5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-583801                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-sbsft                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-newest-cni-583801             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-newest-cni-583801    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-gjz2x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-newest-cni-583801             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node newest-cni-583801 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 44s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-583801 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node newest-cni-583801 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     37s                kubelet          Node newest-cni-583801 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 37s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  37s                kubelet          Node newest-cni-583801 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s                kubelet          Node newest-cni-583801 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 37s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           33s                node-controller  Node newest-cni-583801 event: Registered Node newest-cni-583801 in Controller
	  Normal   Starting                 18s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node newest-cni-583801 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node newest-cni-583801 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s (x8 over 18s)  kubelet          Node newest-cni-583801 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-583801 event: Registered Node newest-cni-583801 in Controller
	
	
	==> dmesg <==
	[ +19.121169] overlayfs: idmapped layers are currently not supported
	[Nov26 20:28] overlayfs: idmapped layers are currently not supported
	[ +26.208465] overlayfs: idmapped layers are currently not supported
	[Nov26 20:29] overlayfs: idmapped layers are currently not supported
	[ +27.162994] overlayfs: idmapped layers are currently not supported
	[Nov26 20:31] overlayfs: idmapped layers are currently not supported
	[Nov26 20:32] overlayfs: idmapped layers are currently not supported
	[Nov26 20:34] overlayfs: idmapped layers are currently not supported
	[Nov26 20:35] overlayfs: idmapped layers are currently not supported
	[Nov26 20:36] overlayfs: idmapped layers are currently not supported
	[Nov26 20:41] overlayfs: idmapped layers are currently not supported
	[Nov26 20:43] overlayfs: idmapped layers are currently not supported
	[Nov26 20:44] overlayfs: idmapped layers are currently not supported
	[  +6.603561] overlayfs: idmapped layers are currently not supported
	[Nov26 20:45] overlayfs: idmapped layers are currently not supported
	[ +36.450367] overlayfs: idmapped layers are currently not supported
	[Nov26 20:47] overlayfs: idmapped layers are currently not supported
	[Nov26 20:48] overlayfs: idmapped layers are currently not supported
	[Nov26 20:49] overlayfs: idmapped layers are currently not supported
	[Nov26 20:50] overlayfs: idmapped layers are currently not supported
	[Nov26 20:51] overlayfs: idmapped layers are currently not supported
	[ +24.066506] overlayfs: idmapped layers are currently not supported
	[Nov26 20:52] overlayfs: idmapped layers are currently not supported
	[Nov26 20:53] overlayfs: idmapped layers are currently not supported
	[ +25.622621] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c095077f35df8e2656c94a79f65541cd81179ffafbefae7b7e437bf363947b4c] <==
	{"level":"warn","ts":"2025-11-26T20:53:43.793175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.829088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.844151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.869890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.897115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.922411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.966946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:43.996294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.032064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.101946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.103394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.124916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.138213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.163306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.181719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.205538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.252881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.281453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.318410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.374102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.397546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.437083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.460373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-26T20:53:44.589251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40892","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-26T20:53:48.190592Z","caller":"traceutil/trace.go:172","msg":"trace[1222665399] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"101.340778ms","start":"2025-11-26T20:53:48.089233Z","end":"2025-11-26T20:53:48.190574Z","steps":["trace[1222665399] 'process raft request'  (duration: 100.633672ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:53:57 up  1:36,  0 user,  load average: 4.40, 3.56, 2.78
	Linux newest-cni-583801 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [17d091197d693185b259153ccacb33eeee4c1ba53fb28487a00287fc279ec0cb] <==
	I1126 20:53:47.725873       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1126 20:53:47.730766       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1126 20:53:47.730864       1 main.go:148] setting mtu 1500 for CNI 
	I1126 20:53:47.730877       1 main.go:178] kindnetd IP family: "ipv4"
	I1126 20:53:47.730891       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-26T20:53:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1126 20:53:47.967963       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1126 20:53:47.967988       1 controller.go:381] "Waiting for informer caches to sync"
	I1126 20:53:47.967997       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1126 20:53:47.968280       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [e09245036c46e645d32534534df4df30de7d27d56a2594110164810bc26e056a] <==
	I1126 20:53:46.929151       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1126 20:53:46.929768       1 aggregator.go:171] initial CRD sync complete...
	I1126 20:53:46.929781       1 autoregister_controller.go:144] Starting autoregister controller
	I1126 20:53:46.929790       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1126 20:53:46.929797       1 cache.go:39] Caches are synced for autoregister controller
	I1126 20:53:46.930078       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1126 20:53:46.984240       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1126 20:53:47.020471       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1126 20:53:47.119788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1126 20:53:48.200679       1 controller.go:667] quota admission added evaluator for: namespaces
	I1126 20:53:48.422501       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1126 20:53:48.707991       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1126 20:53:48.811125       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1126 20:53:49.230243       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.166.159"}
	I1126 20:53:49.267641       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.213.249"}
	E1126 20:53:51.645428       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1126 20:53:51.645606       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-11-26T20:53:51.651154Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0x4001ae45a0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E1126 20:53:51.654447       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.576857ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1126 20:53:51.654681       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1126 20:53:51.668599       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="23.300034ms" method="PATCH" path="/api/v1/namespaces/kube-system/pods/etcd-newest-cni-583801/status" result=null
	I1126 20:53:53.714108       1 controller.go:667] quota admission added evaluator for: endpoints
	I1126 20:53:53.833638       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1126 20:53:54.187158       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1126 20:53:54.231566       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [4cc796441637eb0023f026a71d7a376933ef2ada9d9fc1eda956dc4f4f216436] <==
	I1126 20:53:53.679519       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-583801"
	I1126 20:53:53.679678       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1126 20:53:53.680002       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1126 20:53:53.680048       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1126 20:53:53.680162       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1126 20:53:53.680168       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1126 20:53:53.680173       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1126 20:53:53.689353       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1126 20:53:53.689479       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1126 20:53:53.689974       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1126 20:53:53.690011       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1126 20:53:53.690414       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1126 20:53:53.692337       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1126 20:53:53.693248       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1126 20:53:53.700455       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1126 20:53:53.701081       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1126 20:53:53.704602       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1126 20:53:53.710929       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1126 20:53:53.713020       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1126 20:53:53.715083       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1126 20:53:53.715384       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1126 20:53:53.722691       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1126 20:53:53.722734       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1126 20:53:53.729829       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1126 20:53:53.734702       1 shared_informer.go:356] "Caches are synced" controller="job"
	
	
	==> kube-proxy [3dc8c2d4b980c014ffa1491dd391550e7f5a7a93ef2c30bc9b3edaf00dc0d2b5] <==
	I1126 20:53:49.458107       1 server_linux.go:53] "Using iptables proxy"
	I1126 20:53:50.559524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1126 20:53:50.664431       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1126 20:53:50.672195       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1126 20:53:50.672308       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1126 20:53:52.429078       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1126 20:53:52.429183       1 server_linux.go:132] "Using iptables Proxier"
	I1126 20:53:52.506172       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1126 20:53:52.506672       1 server.go:527] "Version info" version="v1.34.1"
	I1126 20:53:52.506859       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:53:52.508144       1 config.go:200] "Starting service config controller"
	I1126 20:53:52.508199       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1126 20:53:52.508240       1 config.go:106] "Starting endpoint slice config controller"
	I1126 20:53:52.520259       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1126 20:53:52.520360       1 config.go:403] "Starting serviceCIDR config controller"
	I1126 20:53:52.520389       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1126 20:53:52.521075       1 config.go:309] "Starting node config controller"
	I1126 20:53:52.521463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1126 20:53:52.521498       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1126 20:53:52.634926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1126 20:53:52.634962       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1126 20:53:52.635000       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [44bd96cfffedfd12fecaf434158fdab836106a139ff228a697ceeaf1ca7a1314] <==
	I1126 20:53:45.142014       1 serving.go:386] Generated self-signed cert in-memory
	I1126 20:53:52.445601       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1126 20:53:52.445629       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1126 20:53:52.454960       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1126 20:53:52.455082       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1126 20:53:52.456103       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1126 20:53:52.456207       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1126 20:53:52.471257       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:53:52.471283       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1126 20:53:52.471440       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:53:52.471448       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:53:52.657465       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1126 20:53:52.672365       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1126 20:53:52.672444       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 26 20:53:42 newest-cni-583801 kubelet[736]: E1126 20:53:42.361438     736 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-583801\" not found" node="newest-cni-583801"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.058744     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-583801"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.234057     736 apiserver.go:52] "Watching apiserver"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.748824     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.834165     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86669a04-b137-4030-a081-e29138539712-xtables-lock\") pod \"kindnet-sbsft\" (UID: \"86669a04-b137-4030-a081-e29138539712\") " pod="kube-system/kindnet-sbsft"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.834226     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86669a04-b137-4030-a081-e29138539712-lib-modules\") pod \"kindnet-sbsft\" (UID: \"86669a04-b137-4030-a081-e29138539712\") " pod="kube-system/kindnet-sbsft"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.834264     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8-xtables-lock\") pod \"kube-proxy-gjz2x\" (UID: \"b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8\") " pod="kube-system/kube-proxy-gjz2x"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.834289     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/86669a04-b137-4030-a081-e29138539712-cni-cfg\") pod \"kindnet-sbsft\" (UID: \"86669a04-b137-4030-a081-e29138539712\") " pod="kube-system/kindnet-sbsft"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: I1126 20:53:46.834305     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8-lib-modules\") pod \"kube-proxy-gjz2x\" (UID: \"b434ebf3-c1e3-4e4c-9c74-3e2b1cd640e8\") " pod="kube-system/kube-proxy-gjz2x"
	Nov 26 20:53:46 newest-cni-583801 kubelet[736]: E1126 20:53:46.861711     736 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-583801\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-583801' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.119255     736 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.119408     736 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.119442     736 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.120174     736 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.122330     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: E1126 20:53:47.151549     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-583801\" already exists" pod="kube-system/etcd-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.151584     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: E1126 20:53:47.221007     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-583801\" already exists" pod="kube-system/kube-apiserver-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.221041     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: E1126 20:53:47.309751     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-583801\" already exists" pod="kube-system/kube-controller-manager-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: I1126 20:53:47.309790     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-583801"
	Nov 26 20:53:47 newest-cni-583801 kubelet[736]: E1126 20:53:47.381124     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-583801\" already exists" pod="kube-system/kube-scheduler-newest-cni-583801"
	Nov 26 20:53:51 newest-cni-583801 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 26 20:53:51 newest-cni-583801 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 26 20:53:51 newest-cni-583801 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-583801 -n newest-cni-583801
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-583801 -n newest-cni-583801: exit status 2 (435.514589ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-583801 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-jgvmh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-ghd2s kubernetes-dashboard-855c9754f9-54nm8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-583801 describe pod coredns-66bc5c9577-jgvmh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-ghd2s kubernetes-dashboard-855c9754f9-54nm8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-583801 describe pod coredns-66bc5c9577-jgvmh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-ghd2s kubernetes-dashboard-855c9754f9-54nm8: exit status 1 (105.015965ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-jgvmh" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-ghd2s" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-54nm8" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-583801 describe pod coredns-66bc5c9577-jgvmh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-ghd2s kubernetes-dashboard-855c9754f9-54nm8: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.83s)
E1126 21:00:01.522022    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:08.431497    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:08.437883    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:08.449348    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:08.470752    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:08.512112    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:08.593528    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:08.755177    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:09.076817    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:09.718945    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:11.000913    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:13.562237    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:18.684173    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:21.872288    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:21.878601    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:21.889908    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:21.911242    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:21.952597    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:22.033975    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:22.195551    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:22.517255    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:23.158843    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:24.440692    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:27.002701    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:28.925617    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 21:00:32.124423    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (257/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 33.86
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 39.19
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.2
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.55
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 163.37
31 TestAddons/serial/GCPAuth/Namespaces 0.22
32 TestAddons/serial/GCPAuth/FakeCredentials 8.85
48 TestAddons/StoppedEnableDisable 12.51
49 TestCertOptions 40.57
50 TestCertExpiration 332.58
52 TestForceSystemdFlag 43.47
53 TestForceSystemdEnv 43.94
58 TestErrorSpam/setup 32.92
59 TestErrorSpam/start 0.81
60 TestErrorSpam/status 1.1
61 TestErrorSpam/pause 6.48
62 TestErrorSpam/unpause 6.08
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 83.59
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 43.21
70 TestFunctional/serial/KubeContext 0.08
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
75 TestFunctional/serial/CacheCmd/cache/add_local 1.03
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.81
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 32.97
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.49
86 TestFunctional/serial/LogsFileCmd 1.45
87 TestFunctional/serial/InvalidService 4.09
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 9.08
91 TestFunctional/parallel/DryRun 0.43
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.03
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 46.65
101 TestFunctional/parallel/SSHCmd 0.77
102 TestFunctional/parallel/CpCmd 1.75
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.7
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
113 TestFunctional/parallel/License 0.33
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.32
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.41
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
129 TestFunctional/parallel/MountCmd/any-port 6.78
130 TestFunctional/parallel/MountCmd/specific-port 1.86
131 TestFunctional/parallel/MountCmd/VerifyCleanup 2.03
132 TestFunctional/parallel/Version/short 0.11
133 TestFunctional/parallel/Version/components 0.87
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
138 TestFunctional/parallel/ImageCommands/ImageBuild 4.29
139 TestFunctional/parallel/ImageCommands/Setup 0.65
144 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
150 TestFunctional/parallel/ServiceCmd/List 1.34
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.36
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 204
163 TestMultiControlPlane/serial/DeployApp 7.07
164 TestMultiControlPlane/serial/PingHostFromPods 1.47
165 TestMultiControlPlane/serial/AddWorkerNode 59.24
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 19.54
169 TestMultiControlPlane/serial/StopSecondaryNode 12.84
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 33.91
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.24
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 128.31
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.83
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
176 TestMultiControlPlane/serial/StopCluster 36.14
185 TestJSONOutput/start/Command 82.62
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.81
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 44.88
211 TestKicCustomNetwork/use_default_bridge_network 34.4
212 TestKicExistingNetwork 34.98
213 TestKicCustomSubnet 39.17
214 TestKicStaticIP 36.73
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 71.64
219 TestMountStart/serial/StartWithMountFirst 8.54
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.85
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.31
226 TestMountStart/serial/RestartStopped 7.78
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 136.4
231 TestMultiNode/serial/DeployApp2Nodes 4.95
232 TestMultiNode/serial/PingHostFrom2Pods 0.9
233 TestMultiNode/serial/AddNode 58.02
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.24
237 TestMultiNode/serial/StopNode 2.44
238 TestMultiNode/serial/StartAfterStop 7.97
239 TestMultiNode/serial/RestartKeepsNodes 82.57
240 TestMultiNode/serial/DeleteNode 5.65
241 TestMultiNode/serial/StopMultiNode 23.93
242 TestMultiNode/serial/RestartMultiNode 46.99
243 TestMultiNode/serial/ValidateNameConflict 37.43
248 TestPreload 124.66
250 TestScheduledStopUnix 109.86
253 TestInsufficientStorage 12.96
254 TestRunningBinaryUpgrade 316.62
256 TestKubernetesUpgrade 193.46
257 TestMissingContainerUpgrade 117.38
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 44.36
261 TestNoKubernetes/serial/StartWithStopK8s 18.1
262 TestNoKubernetes/serial/Start 8.75
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
265 TestNoKubernetes/serial/ProfileList 0.68
266 TestNoKubernetes/serial/Stop 1.28
267 TestNoKubernetes/serial/StartNoArgs 7.53
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
269 TestStoppedBinaryUpgrade/Setup 1.74
270 TestStoppedBinaryUpgrade/Upgrade 313.36
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.33
280 TestPause/serial/Start 84.95
281 TestPause/serial/SecondStartNoReconfiguration 41.52
290 TestNetworkPlugins/group/false 5.8
295 TestStartStop/group/old-k8s-version/serial/FirstStart 61.24
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
298 TestStartStop/group/old-k8s-version/serial/Stop 12.01
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
300 TestStartStop/group/old-k8s-version/serial/SecondStart 53.08
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.28
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
306 TestStartStop/group/no-preload/serial/FirstStart 61.71
307 TestStartStop/group/no-preload/serial/DeployApp 8.32
309 TestStartStop/group/no-preload/serial/Stop 11.99
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
311 TestStartStop/group/no-preload/serial/SecondStart 59.4
313 TestStartStop/group/embed-certs/serial/FirstStart 82.46
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.19
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.26
320 TestStartStop/group/embed-certs/serial/DeployApp 10.42
322 TestStartStop/group/embed-certs/serial/Stop 12.63
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
324 TestStartStop/group/embed-certs/serial/SecondStart 50.42
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.36
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.67
335 TestStartStop/group/newest-cni/serial/FirstStart 40.65
336 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/Stop 1.42
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
341 TestStartStop/group/newest-cni/serial/SecondStart 18.49
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
349 TestNetworkPlugins/group/auto/Start 85.59
350 TestNetworkPlugins/group/flannel/Start 65.34
351 TestNetworkPlugins/group/flannel/ControllerPod 6.01
352 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
353 TestNetworkPlugins/group/flannel/NetCatPod 9.3
354 TestNetworkPlugins/group/auto/KubeletFlags 0.29
355 TestNetworkPlugins/group/auto/NetCatPod 11.27
356 TestNetworkPlugins/group/flannel/DNS 0.22
357 TestNetworkPlugins/group/flannel/Localhost 0.18
358 TestNetworkPlugins/group/flannel/HairPin 0.18
359 TestNetworkPlugins/group/auto/DNS 0.24
360 TestNetworkPlugins/group/auto/Localhost 0.19
361 TestNetworkPlugins/group/auto/HairPin 0.17
362 TestNetworkPlugins/group/calico/Start 83.2
363 TestNetworkPlugins/group/custom-flannel/Start 70.42
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.36
366 TestNetworkPlugins/group/calico/ControllerPod 6
367 TestNetworkPlugins/group/calico/KubeletFlags 0.3
368 TestNetworkPlugins/group/calico/NetCatPod 10.36
369 TestNetworkPlugins/group/custom-flannel/DNS 0.23
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
372 TestNetworkPlugins/group/calico/DNS 0.21
373 TestNetworkPlugins/group/calico/Localhost 0.18
374 TestNetworkPlugins/group/calico/HairPin 0.19
375 TestNetworkPlugins/group/kindnet/Start 88.78
376 TestNetworkPlugins/group/bridge/Start 78.65
377 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
379 TestNetworkPlugins/group/bridge/NetCatPod 10.27
380 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
381 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
382 TestNetworkPlugins/group/bridge/DNS 0.15
383 TestNetworkPlugins/group/bridge/Localhost 0.14
384 TestNetworkPlugins/group/bridge/HairPin 0.15
385 TestNetworkPlugins/group/kindnet/DNS 0.17
386 TestNetworkPlugins/group/kindnet/Localhost 0.12
387 TestNetworkPlugins/group/kindnet/HairPin 0.16
388 TestNetworkPlugins/group/enable-default-cni/Start 52.2
389 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
390 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.26
391 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
392 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
393 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14

TestDownloadOnly/v1.28.0/json-events (33.86s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-343127 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-343127 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (33.856196286s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (33.86s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1126 19:36:02.040529    4129 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1126 19:36:02.040636    4129 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-343127
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-343127: exit status 85 (87.248943ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-343127 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-343127 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:35:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:35:28.223959    4135 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:35:28.224128    4135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:35:28.224157    4135 out.go:374] Setting ErrFile to fd 2...
	I1126 19:35:28.224177    4135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:35:28.224427    4135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	W1126 19:35:28.224572    4135 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21974-2326/.minikube/config/config.json: open /home/jenkins/minikube-integration/21974-2326/.minikube/config/config.json: no such file or directory
	I1126 19:35:28.225015    4135 out.go:368] Setting JSON to true
	I1126 19:35:28.225762    4135 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1059,"bootTime":1764184670,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 19:35:28.225855    4135 start.go:143] virtualization:  
	I1126 19:35:28.231296    4135 out.go:99] [download-only-343127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1126 19:35:28.231491    4135 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball: no such file or directory
	I1126 19:35:28.231607    4135 notify.go:221] Checking for updates...
	I1126 19:35:28.236336    4135 out.go:171] MINIKUBE_LOCATION=21974
	I1126 19:35:28.239665    4135 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:35:28.242786    4135 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 19:35:28.246022    4135 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 19:35:28.249186    4135 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1126 19:35:28.255149    4135 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1126 19:35:28.255409    4135 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:35:28.282509    4135 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 19:35:28.282613    4135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:35:28.684356    4135 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-26 19:35:28.675131246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:35:28.684460    4135 docker.go:319] overlay module found
	I1126 19:35:28.687540    4135 out.go:99] Using the docker driver based on user configuration
	I1126 19:35:28.687572    4135 start.go:309] selected driver: docker
	I1126 19:35:28.687578    4135 start.go:927] validating driver "docker" against <nil>
	I1126 19:35:28.687667    4135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:35:28.750316    4135 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-26 19:35:28.741857626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:35:28.750493    4135 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 19:35:28.750796    4135 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1126 19:35:28.750969    4135 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1126 19:35:28.754293    4135 out.go:171] Using Docker driver with root privileges
	I1126 19:35:28.757303    4135 cni.go:84] Creating CNI manager for ""
	I1126 19:35:28.757370    4135 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:35:28.757386    4135 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 19:35:28.757467    4135 start.go:353] cluster config:
	{Name:download-only-343127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-343127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:35:28.760469    4135 out.go:99] Starting "download-only-343127" primary control-plane node in "download-only-343127" cluster
	I1126 19:35:28.760491    4135 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 19:35:28.763434    4135 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1126 19:35:28.763475    4135 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 19:35:28.763513    4135 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 19:35:28.780254    4135 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1126 19:35:28.780439    4135 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1126 19:35:28.780533    4135 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1126 19:35:28.825342    4135 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1126 19:35:28.825365    4135 cache.go:65] Caching tarball of preloaded images
	I1126 19:35:28.825534    4135 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1126 19:35:28.828845    4135 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1126 19:35:28.828873    4135 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1126 19:35:28.923648    4135 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1126 19:35:28.923780    4135 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1126 19:35:33.932408    4135 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	
	
	* The control-plane node download-only-343127 host does not exist
	  To start a cluster, run: "minikube start -p download-only-343127"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-343127
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.1/json-events (39.19s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-163348 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-163348 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (39.185827539s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (39.19s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1126 19:36:41.667364    4129 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1126 19:36:41.667398    4129 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-163348
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-163348: exit status 85 (61.528128ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-343127 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-343127 │ jenkins │ v1.37.0 │ 26 Nov 25 19:35 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ delete  │ -p download-only-343127                                                                                                                                                   │ download-only-343127 │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │ 26 Nov 25 19:36 UTC │
	│ start   │ -o=json --download-only -p download-only-163348 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-163348 │ jenkins │ v1.37.0 │ 26 Nov 25 19:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/26 19:36:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1126 19:36:02.522577    4337 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:36:02.522819    4337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:36:02.522850    4337 out.go:374] Setting ErrFile to fd 2...
	I1126 19:36:02.522869    4337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:36:02.523173    4337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:36:02.523617    4337 out.go:368] Setting JSON to true
	I1126 19:36:02.524396    4337 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1093,"bootTime":1764184670,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 19:36:02.524490    4337 start.go:143] virtualization:  
	I1126 19:36:02.528062    4337 out.go:99] [download-only-163348] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 19:36:02.528290    4337 notify.go:221] Checking for updates...
	I1126 19:36:02.531148    4337 out.go:171] MINIKUBE_LOCATION=21974
	I1126 19:36:02.534198    4337 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:36:02.537168    4337 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 19:36:02.540169    4337 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 19:36:02.543094    4337 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1126 19:36:02.548734    4337 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1126 19:36:02.549069    4337 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:36:02.579463    4337 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 19:36:02.579567    4337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:36:02.644104    4337 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-26 19:36:02.634612868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:36:02.644216    4337 docker.go:319] overlay module found
	I1126 19:36:02.647011    4337 out.go:99] Using the docker driver based on user configuration
	I1126 19:36:02.647055    4337 start.go:309] selected driver: docker
	I1126 19:36:02.647062    4337 start.go:927] validating driver "docker" against <nil>
	I1126 19:36:02.647164    4337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:36:02.699328    4337 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-26 19:36:02.690488178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:36:02.699479    4337 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1126 19:36:02.699758    4337 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1126 19:36:02.699909    4337 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1126 19:36:02.702914    4337 out.go:171] Using Docker driver with root privileges
	I1126 19:36:02.705723    4337 cni.go:84] Creating CNI manager for ""
	I1126 19:36:02.705790    4337 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1126 19:36:02.705807    4337 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1126 19:36:02.705889    4337 start.go:353] cluster config:
	{Name:download-only-163348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-163348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:36:02.708920    4337 out.go:99] Starting "download-only-163348" primary control-plane node in "download-only-163348" cluster
	I1126 19:36:02.708945    4337 cache.go:134] Beginning downloading kic base image for docker with crio
	I1126 19:36:02.711893    4337 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1126 19:36:02.711938    4337 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:36:02.712102    4337 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1126 19:36:02.729054    4337 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1126 19:36:02.729210    4337 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1126 19:36:02.729237    4337 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1126 19:36:02.729249    4337 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1126 19:36:02.729256    4337 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1126 19:36:02.767806    4337 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1126 19:36:02.767842    4337 cache.go:65] Caching tarball of preloaded images
	I1126 19:36:02.768013    4337 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1126 19:36:02.771031    4337 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1126 19:36:02.771061    4337 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1126 19:36:02.860070    4337 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1126 19:36:02.860123    4337 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21974-2326/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-163348 host does not exist
	  To start a cluster, run: "minikube start -p download-only-163348"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

TestDownloadOnly/v1.34.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.20s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-163348
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
I1126 19:36:42.806913    4129 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-453571 --alsologtostderr --binary-mirror http://127.0.0.1:34029 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-453571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-453571
--- PASS: TestBinaryMirror (0.55s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-152801
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-152801: exit status 85 (64.074731ms)

-- stdout --
	* Profile "addons-152801" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-152801"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-152801
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-152801: exit status 85 (75.721897ms)

-- stdout --
	* Profile "addons-152801" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-152801"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (163.37s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-152801 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-152801 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m43.371221374s)
--- PASS: TestAddons/Setup (163.37s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-152801 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-152801 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/serial/GCPAuth/FakeCredentials (8.85s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-152801 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-152801 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ed6dec8c-673a-4208-af1f-345ca4163452] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ed6dec8c-673a-4208-af1f-345ca4163452] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003463975s
addons_test.go:694: (dbg) Run:  kubectl --context addons-152801 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-152801 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-152801 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-152801 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.85s)

TestAddons/StoppedEnableDisable (12.51s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-152801
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-152801: (12.236874961s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-152801
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-152801
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-152801
--- PASS: TestAddons/StoppedEnableDisable (12.51s)

TestCertOptions (40.57s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-207115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-207115 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.733740576s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-207115 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-207115 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-207115 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-207115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-207115
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-207115: (2.10657483s)
--- PASS: TestCertOptions (40.57s)

TestCertExpiration (332.58s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-164741 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1126 20:44:28.111756    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-164741 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.592923314s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-164741 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-164741 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m50.021772633s)
helpers_test.go:175: Cleaning up "cert-expiration-164741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-164741
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-164741: (2.964169364s)
--- PASS: TestCertExpiration (332.58s)

TestForceSystemdFlag (43.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-622960 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-622960 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.958856379s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-622960 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-622960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-622960
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-622960: (5.098303819s)
--- PASS: TestForceSystemdFlag (43.47s)

TestForceSystemdEnv (43.94s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-274518 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-274518 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.028937235s)
helpers_test.go:175: Cleaning up "force-systemd-env-274518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-274518
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-274518: (2.91173128s)
--- PASS: TestForceSystemdEnv (43.94s)

TestErrorSpam/setup (32.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-688525 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-688525 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-688525 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-688525 --driver=docker  --container-runtime=crio: (32.9180412s)
--- PASS: TestErrorSpam/setup (32.92s)

TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (6.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 pause: exit status 80 (1.939615381s)

-- stdout --
	* Pausing node nospam-688525 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:43:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 pause: exit status 80 (2.045593416s)
-- stdout --
	* Pausing node nospam-688525 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:43:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 pause: exit status 80 (2.489736196s)
-- stdout --
	* Pausing node nospam-688525 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:43:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.48s)
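Every pause (and, below, unpause) failure in this run has the same root cause: on the node, `sudo runc list -f json` exits 1 because runc's state root `/run/runc` does not exist, and minikube surfaces that as `GUEST_PAUSE`/`GUEST_UNPAUSE` with exit status 80. A minimal shell sketch of that failure mode, using a throwaway directory as a stand-in for `/run/runc` (the `list_containers` helper is illustrative, not minikube's actual code):

```shell
#!/bin/sh
# Sketch: reproduce how a missing runc state root becomes the
# "open /run/runc: no such file or directory" error seen above.
# RUNC_ROOT is a hypothetical stand-in for /run/runc on the node.
RUNC_ROOT="$(mktemp -d)/runc"      # parent exists; the root itself does not

list_containers() {
    # Emulates "runc list -f json": fail when the state root is absent.
    if [ ! -d "$RUNC_ROOT" ]; then
        echo "open $RUNC_ROOT: no such file or directory" >&2
        return 1
    fi
    echo "[]"                      # no containers tracked yet
}

if ! list_containers; then
    echo "pause aborted: listing running containers failed (minikube maps this to exit 80)"
fi

mkdir -p "$RUNC_ROOT"              # once the root exists, the listing succeeds
list_containers
```

The interesting part for this report is that the error comes from the listing step, before any container is actually paused, which is why stdout only ever shows "Pausing node ...".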

                                                
                                    
TestErrorSpam/unpause (6.08s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 unpause: exit status 80 (1.972883941s)
-- stdout --
	* Unpausing node nospam-688525 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:43:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 unpause: exit status 80 (2.272516578s)
-- stdout --
	* Unpausing node nospam-688525 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:43:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 unpause: exit status 80 (1.831092916s)
-- stdout --
	* Unpausing node nospam-688525 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-26T19:43:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.08s)

                                                
                                    
TestErrorSpam/stop (1.52s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 stop: (1.324341996s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-688525 --log_dir /tmp/nospam-688525 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21974-2326/.minikube/files/etc/test/nested/copy/4129/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (83.59s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-793215 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1126 19:44:28.111859    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:28.118981    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:28.130511    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:28.151864    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:28.193251    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:28.274653    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:28.436211    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:28.757837    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:29.399483    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:30.681052    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:33.242963    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:38.364270    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:44:48.605639    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 19:45:09.086969    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-793215 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m23.587661772s)
--- PASS: TestFunctional/serial/StartWithProxy (83.59s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (43.21s)
=== RUN   TestFunctional/serial/SoftStart
I1126 19:45:17.200812    4129 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-793215 --alsologtostderr -v=8
E1126 19:45:50.049560    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-793215 --alsologtostderr -v=8: (43.208371941s)
functional_test.go:678: soft start took 43.208866677s for "functional-793215" cluster.
I1126 19:46:00.409486    4129 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (43.21s)

                                                
                                    
TestFunctional/serial/KubeContext (0.08s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.12s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-793215 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-793215 cache add registry.k8s.io/pause:3.1: (1.212004701s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-793215 cache add registry.k8s.io/pause:3.3: (1.166956928s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-793215 cache add registry.k8s.io/pause:latest: (1.106988822s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-793215 /tmp/TestFunctionalserialCacheCmdcacheadd_local1341963121/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 cache add minikube-local-cache-test:functional-793215
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 cache delete minikube-local-cache-test:functional-793215
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-793215
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.007798ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.81s)
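The cache_reload sequence above is: delete the image from the node (`crictl rmi`), confirm `crictl inspecti` now fails with "no such image", run `minikube cache reload` to push the host-side cache back into the node, and confirm `inspecti` succeeds again. A toy sketch of that round trip, with plain directories standing in for the host cache and the node's image store (all paths and names here are illustrative, not minikube internals):

```shell
#!/bin/sh
# Sketch of the cache_reload flow: "cache" plays minikube's host-side
# image cache, "node" plays the container runtime's image store.
cache="$(mktemp -d)"
node="$(mktemp -d)"

touch "$cache/pause-latest"                 # image already cached on the host
cp "$cache/pause-latest" "$node/"           # and loaded into the node

rm "$node/pause-latest"                     # like "crictl rmi": gone from the node
[ -f "$node/pause-latest" ] || echo "inspecti: no such image"   # inspecti fails

cp "$cache"/* "$node"/                      # like "cache reload": re-push cached images
[ -f "$node/pause-latest" ] && echo "inspecti: image present"   # inspecti succeeds
```

The point the test verifies is that the host cache survives deletions inside the node, so a reload restores exactly what was cached.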

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 kubectl -- --context functional-793215 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-793215 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (32.97s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-793215 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-793215 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.97458403s)
functional_test.go:776: restart took 32.974684198s for "functional-793215" cluster.
I1126 19:46:40.692078    4129 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (32.97s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-793215 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.49s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-793215 logs: (1.493409964s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.45s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 logs --file /tmp/TestFunctionalserialLogsFileCmd392221570/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-793215 logs --file /tmp/TestFunctionalserialLogsFileCmd392221570/001/logs.txt: (1.447269069s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
TestFunctional/serial/InvalidService (4.09s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-793215 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-793215
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-793215: exit status 115 (392.402216ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31142 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-793215 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.09s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 config get cpus: exit status 14 (95.546938ms)
** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 config get cpus: exit status 14 (60.851476ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
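Editor's aside: the ConfigCmd run above exercises a simple cycle — `config unset cpus` followed by `config get cpus` fails with exit status 14, `config set cpus 2` makes `get` succeed, and a final `unset` makes `get` fail again. A minimal model of that cycle is sketched below; the `Config` class and its return conventions are hypothetical illustrations, and only the exit code 14 and the error text are taken from the log.

```python
# Illustrative model of the get/set/unset cycle the ConfigCmd test runs.
# The exit code 14 and error message mirror the log; the class itself is
# a hypothetical stand-in, not minikube's implementation.
class Config:
    def __init__(self):
        self._values = {}

    def set(self, key, value):
        self._values[key] = value
        return 0  # success exit code

    def unset(self, key):
        self._values.pop(key, None)
        return 0  # unsetting a missing key is not an error in the log

    def get(self, key):
        # Missing keys reproduce the observed "exit status 14" behavior.
        if key not in self._values:
            return 14, "Error: specified key could not be found in config"
        return 0, self._values[key]

cfg = Config()
cfg.unset("cpus")
assert cfg.get("cpus")[0] == 14     # get after unset fails
cfg.set("cpus", "2")
assert cfg.get("cpus") == (0, "2")  # get after set succeeds
cfg.unset("cpus")
assert cfg.get("cpus")[0] == 14     # get after final unset fails again
```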

TestFunctional/parallel/DashboardCmd (9.08s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-793215 --alsologtostderr -v=1]
2025/11/26 19:57:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-793215 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 29971: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.08s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-793215 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-793215 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (191.678137ms)

-- stdout --
	* [functional-793215] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1126 19:57:16.664750   29727 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:57:16.664881   29727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:57:16.664890   29727 out.go:374] Setting ErrFile to fd 2...
	I1126 19:57:16.664895   29727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:57:16.665141   29727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:57:16.665487   29727 out.go:368] Setting JSON to false
	I1126 19:57:16.666375   29727 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2367,"bootTime":1764184670,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 19:57:16.666442   29727 start.go:143] virtualization:  
	I1126 19:57:16.671414   29727 out.go:179] * [functional-793215] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 19:57:16.674288   29727 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:57:16.674346   29727 notify.go:221] Checking for updates...
	I1126 19:57:16.680696   29727 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:57:16.683473   29727 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 19:57:16.686763   29727 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 19:57:16.689686   29727 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 19:57:16.692440   29727 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:57:16.695668   29727 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:57:16.696305   29727 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:57:16.723338   29727 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 19:57:16.723453   29727 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:57:16.784170   29727 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 19:57:16.775176018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:57:16.784283   29727 docker.go:319] overlay module found
	I1126 19:57:16.789048   29727 out.go:179] * Using the docker driver based on existing profile
	I1126 19:57:16.791829   29727 start.go:309] selected driver: docker
	I1126 19:57:16.791853   29727 start.go:927] validating driver "docker" against &{Name:functional-793215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-793215 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:57:16.791950   29727 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:57:16.795294   29727 out.go:203] 
	W1126 19:57:16.798193   29727 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1126 19:57:16.800961   29727 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-793215 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-793215 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-793215 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (211.779335ms)

-- stdout --
	* [functional-793215] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1126 19:57:16.453728   29682 out.go:360] Setting OutFile to fd 1 ...
	I1126 19:57:16.453844   29682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:57:16.453854   29682 out.go:374] Setting ErrFile to fd 2...
	I1126 19:57:16.453859   29682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 19:57:16.454247   29682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 19:57:16.454622   29682 out.go:368] Setting JSON to false
	I1126 19:57:16.455454   29682 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2367,"bootTime":1764184670,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 19:57:16.455522   29682 start.go:143] virtualization:  
	I1126 19:57:16.459104   29682 out.go:179] * [functional-793215] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1126 19:57:16.462076   29682 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 19:57:16.462223   29682 notify.go:221] Checking for updates...
	I1126 19:57:16.469655   29682 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 19:57:16.472508   29682 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 19:57:16.475331   29682 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 19:57:16.478245   29682 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 19:57:16.481096   29682 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 19:57:16.484601   29682 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 19:57:16.485227   29682 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 19:57:16.514726   29682 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 19:57:16.514833   29682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 19:57:16.592390   29682 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-26 19:57:16.582866488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 19:57:16.592502   29682 docker.go:319] overlay module found
	I1126 19:57:16.596447   29682 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1126 19:57:16.599153   29682 start.go:309] selected driver: docker
	I1126 19:57:16.599173   29682 start.go:927] validating driver "docker" against &{Name:functional-793215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-793215 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1126 19:57:16.599272   29682 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 19:57:16.603404   29682 out.go:203] 
	W1126 19:57:16.606550   29682 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1126 19:57:16.609281   29682 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (46.65s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [cdfa27a0-3423-43ff-bdea-ded6c65fa201] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00300334s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-793215 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-793215 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-793215 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-793215 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b0e65454-d4a5-4677-8192-184623315329] Pending
helpers_test.go:352: "sp-pod" [b0e65454-d4a5-4677-8192-184623315329] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b0e65454-d4a5-4677-8192-184623315329] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 32.00358707s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-793215 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-793215 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-793215 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [526936bd-6243-4e93-8042-0d1ebaa90a06] Pending
helpers_test.go:352: "sp-pod" [526936bd-6243-4e93-8042-0d1ebaa90a06] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [526936bd-6243-4e93-8042-0d1ebaa90a06] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003815568s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-793215 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.65s)
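Editor's aside: the PersistentVolumeClaim run above applies `testdata/storage-provisioner/pvc.yaml` without showing its contents. For orientation, a claim exercising the default StorageClass the way this test does typically takes the following shape; only the name `myclaim` comes from the `get pvc myclaim` step in the log, while the access mode and size are illustrative guesses, not the repository's actual file.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim          # matches the "get pvc myclaim" step in the log
spec:
  accessModes:
    - ReadWriteOnce      # illustrative; the actual testdata file may differ
  resources:
    requests:
      storage: 500Mi     # illustrative size
```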

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (1.75s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh -n functional-793215 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 cp functional-793215:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd591759266/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh -n functional-793215 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh -n functional-793215 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.75s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4129/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "sudo cat /etc/test/nested/copy/4129/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4129.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "sudo cat /etc/ssl/certs/4129.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4129.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "sudo cat /usr/share/ca-certificates/4129.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "sudo cat /etc/ssl/certs/41292.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "sudo cat /usr/share/ca-certificates/41292.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-793215 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 ssh "sudo systemctl is-active docker": exit status 1 (289.530165ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 ssh "sudo systemctl is-active containerd": exit status 1 (271.279468ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

TestFunctional/parallel/License (0.33s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-793215 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-793215 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-793215 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-793215 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 26491: os: process already finished
helpers_test.go:519: unable to terminate pid 26311: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-793215 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.32s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-793215 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [a9f89ca2-d7e0-454f-8fa4-4cd85e0ad0ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [a9f89ca2-d7e0-454f-8fa4-4cd85e0ad0ea] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003672936s
I1126 19:46:59.196672    4129 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-793215 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.37.16 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-793215 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "357.094833ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "51.808278ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "360.339043ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "53.466845ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (6.78s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-793215 /tmp/TestFunctionalparallelMountCmdany-port2916858526/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764187024707060938" to /tmp/TestFunctionalparallelMountCmdany-port2916858526/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764187024707060938" to /tmp/TestFunctionalparallelMountCmdany-port2916858526/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764187024707060938" to /tmp/TestFunctionalparallelMountCmdany-port2916858526/001/test-1764187024707060938
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.399304ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1126 19:57:05.028748    4129 retry.go:31] will retry after 360.158076ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 26 19:57 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 26 19:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 26 19:57 test-1764187024707060938
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh cat /mount-9p/test-1764187024707060938
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-793215 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c0517be5-27dd-4dec-a211-1e16634ba43a] Pending
helpers_test.go:352: "busybox-mount" [c0517be5-27dd-4dec-a211-1e16634ba43a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c0517be5-27dd-4dec-a211-1e16634ba43a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c0517be5-27dd-4dec-a211-1e16634ba43a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003793363s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-793215 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-793215 /tmp/TestFunctionalparallelMountCmdany-port2916858526/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.78s)

TestFunctional/parallel/MountCmd/specific-port (1.86s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-793215 /tmp/TestFunctionalparallelMountCmdspecific-port2976642803/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (372.768018ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1126 19:57:11.860154    4129 retry.go:31] will retry after 443.422823ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-793215 /tmp/TestFunctionalparallelMountCmdspecific-port2976642803/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 ssh "sudo umount -f /mount-9p": exit status 1 (292.205334ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-793215 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-793215 /tmp/TestFunctionalparallelMountCmdspecific-port2976642803/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-793215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2875049040/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-793215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2875049040/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-793215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2875049040/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 ssh "findmnt -T" /mount1: exit status 1 (555.01907ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1126 19:57:13.908203    4129 retry.go:31] will retry after 571.486148ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-793215 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-793215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2875049040/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-793215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2875049040/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-793215 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2875049040/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

TestFunctional/parallel/Version/short (0.11s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (0.87s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.87s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-793215 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-793215 image ls --format short --alsologtostderr:
I1126 19:57:36.833580   31917 out.go:360] Setting OutFile to fd 1 ...
I1126 19:57:36.833713   31917 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:57:36.833725   31917 out.go:374] Setting ErrFile to fd 2...
I1126 19:57:36.833743   31917 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:57:36.834068   31917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
I1126 19:57:36.834705   31917 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:57:36.834860   31917 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:57:36.835382   31917 cli_runner.go:164] Run: docker container inspect functional-793215 --format={{.State.Status}}
I1126 19:57:36.852882   31917 ssh_runner.go:195] Run: systemctl --version
I1126 19:57:36.852939   31917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
I1126 19:57:36.869492   31917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
I1126 19:57:36.972236   31917 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-793215 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ localhost/my-image                      │ functional-793215  │ 5eff09bad0d29 │ 1.64MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-793215 image ls --format table --alsologtostderr:
I1126 19:57:41.887345   32946 out.go:360] Setting OutFile to fd 1 ...
I1126 19:57:41.887446   32946 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:57:41.887456   32946 out.go:374] Setting ErrFile to fd 2...
I1126 19:57:41.887462   32946 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:57:41.887754   32946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
I1126 19:57:41.888379   32946 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:57:41.888501   32946 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:57:41.889049   32946 cli_runner.go:164] Run: docker container inspect functional-793215 --format={{.State.Status}}
I1126 19:57:41.910872   32946 ssh_runner.go:195] Run: systemctl --version
I1126 19:57:41.910936   32946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
I1126 19:57:41.931486   32946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
I1126 19:57:42.043623   32946 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-793215 image ls --format json --alsologtostderr:
[{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"1b3cab8dd94c5e23fd31711e20da4d16173e56681eea57e29945f3e82296dffb","repoDigests":["docker.io/library/6230897ad6555da5df1b434b1e12090909c65ca14daf2b6c21915f40080d46a9-tmp
@sha256:1b930f9053f08f777bf20ea5233e668af720ea388ba9d727bc747083304137b0"],"repoTags":[],"size":"1638179"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"5eff09bad0d29aee7b8cb8d77a53429c96b9fd3e11022955fb581c82a8a84cad","repoDigests":["localhost/my-image@sha256:34800f5dffb925e2d38927cb126118ad29231ea1652abb78464c0db72b61a880"],"repoTags":["localhost/my-image:functional-793215"],"size":"1640791"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-793215 image ls --format json --alsologtostderr:
I1126 19:57:41.607414   32863 out.go:360] Setting OutFile to fd 1 ...
I1126 19:57:41.607567   32863 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:57:41.607574   32863 out.go:374] Setting ErrFile to fd 2...
I1126 19:57:41.607580   32863 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:57:41.607833   32863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
I1126 19:57:41.608393   32863 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:57:41.608498   32863 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:57:41.608978   32863 cli_runner.go:164] Run: docker container inspect functional-793215 --format={{.State.Status}}
I1126 19:57:41.637399   32863 ssh_runner.go:195] Run: systemctl --version
I1126 19:57:41.637460   32863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
I1126 19:57:41.659149   32863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
I1126 19:57:41.782355   32863 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-793215 image ls --format yaml --alsologtostderr:
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-793215 image ls --format yaml --alsologtostderr:
I1126 19:57:37.056380   31953 out.go:360] Setting OutFile to fd 1 ...
I1126 19:57:37.056646   31953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:57:37.056681   31953 out.go:374] Setting ErrFile to fd 2...
I1126 19:57:37.056700   31953 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:57:37.057009   31953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
I1126 19:57:37.057753   31953 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:57:37.058012   31953 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:57:37.058627   31953 cli_runner.go:164] Run: docker container inspect functional-793215 --format={{.State.Status}}
I1126 19:57:37.083179   31953 ssh_runner.go:195] Run: systemctl --version
I1126 19:57:37.083230   31953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
I1126 19:57:37.100711   31953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
I1126 19:57:37.204733   31953 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-793215 ssh pgrep buildkitd: exit status 1 (266.731441ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image build -t localhost/my-image:functional-793215 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-793215 image build -t localhost/my-image:functional-793215 testdata/build --alsologtostderr: (3.732748172s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-793215 image build -t localhost/my-image:functional-793215 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1b3cab8dd94
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-793215
--> 5eff09bad0d
Successfully tagged localhost/my-image:functional-793215
5eff09bad0d29aee7b8cb8d77a53429c96b9fd3e11022955fb581c82a8a84cad
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-793215 image build -t localhost/my-image:functional-793215 testdata/build --alsologtostderr:
I1126 19:57:37.602930   32053 out.go:360] Setting OutFile to fd 1 ...
I1126 19:57:37.603153   32053 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:57:37.603166   32053 out.go:374] Setting ErrFile to fd 2...
I1126 19:57:37.603172   32053 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1126 19:57:37.603460   32053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
I1126 19:57:37.604123   32053 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:57:37.604856   32053 config.go:182] Loaded profile config "functional-793215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1126 19:57:37.605422   32053 cli_runner.go:164] Run: docker container inspect functional-793215 --format={{.State.Status}}
I1126 19:57:37.631385   32053 ssh_runner.go:195] Run: systemctl --version
I1126 19:57:37.631446   32053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-793215
I1126 19:57:37.670325   32053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/functional-793215/id_rsa Username:docker}
I1126 19:57:37.784060   32053 build_images.go:162] Building image from path: /tmp/build.2341412296.tar
I1126 19:57:37.784136   32053 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1126 19:57:37.792516   32053 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2341412296.tar
I1126 19:57:37.796537   32053 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2341412296.tar: stat -c "%s %y" /var/lib/minikube/build/build.2341412296.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2341412296.tar': No such file or directory
I1126 19:57:37.796567   32053 ssh_runner.go:362] scp /tmp/build.2341412296.tar --> /var/lib/minikube/build/build.2341412296.tar (3072 bytes)
I1126 19:57:37.814481   32053 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2341412296
I1126 19:57:37.824125   32053 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2341412296 -xf /var/lib/minikube/build/build.2341412296.tar
I1126 19:57:37.833542   32053 crio.go:315] Building image: /var/lib/minikube/build/build.2341412296
I1126 19:57:37.833607   32053 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-793215 /var/lib/minikube/build/build.2341412296 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1126 19:57:41.215774   32053 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-793215 /var/lib/minikube/build/build.2341412296 --cgroup-manager=cgroupfs: (3.382145367s)
I1126 19:57:41.215843   32053 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2341412296
I1126 19:57:41.224269   32053 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2341412296.tar
I1126 19:57:41.232336   32053 build_images.go:218] Built localhost/my-image:functional-793215 from /tmp/build.2341412296.tar
I1126 19:57:41.232370   32053 build_images.go:134] succeeded building to: functional-793215
I1126 19:57:41.232380   32053 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.29s)

TestFunctional/parallel/ImageCommands/Setup (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-793215
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image rm kicbase/echo-server:functional-793215 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ServiceCmd/List (1.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-793215 service list: (1.339279503s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-793215 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-793215 service list -o json: (1.361941712s)
functional_test.go:1504: Took "1.362035485s" to run "out/minikube-linux-arm64 -p functional-793215 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.36s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-793215
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-793215
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-793215
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
TestMultiControlPlane/serial/StartCluster (204s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1126 19:59:28.111800    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:00:51.176147    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m23.106322328s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (204.00s)

TestMultiControlPlane/serial/DeployApp (7.07s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 kubectl -- rollout status deployment/busybox: (4.337642701s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-6gwl4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-72bpv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-vwpd8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-6gwl4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-72bpv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-vwpd8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-6gwl4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-72bpv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-vwpd8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.07s)

TestMultiControlPlane/serial/PingHostFromPods (1.47s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-6gwl4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-6gwl4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-72bpv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-72bpv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-vwpd8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 kubectl -- exec busybox-7b57f96db7-vwpd8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)

TestMultiControlPlane/serial/AddWorkerNode (59.24s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 node add --alsologtostderr -v 5
E1126 20:01:48.594197    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:01:48.600621    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:01:48.612033    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:01:48.633653    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:01:48.675017    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:01:48.756524    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:01:48.918043    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:01:49.239700    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:01:49.881143    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:01:51.163231    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:01:53.724750    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:01:58.846099    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:02:09.087650    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 node add --alsologtostderr -v 5: (58.165438386s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5: (1.06998379s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.24s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-278127 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.080237297s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (19.54s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 status --output json --alsologtostderr -v 5: (1.063721253s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp testdata/cp-test.txt ha-278127:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837002730/001/cp-test_ha-278127.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127:/home/docker/cp-test.txt ha-278127-m02:/home/docker/cp-test_ha-278127_ha-278127-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m02 "sudo cat /home/docker/cp-test_ha-278127_ha-278127-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127:/home/docker/cp-test.txt ha-278127-m03:/home/docker/cp-test_ha-278127_ha-278127-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m03 "sudo cat /home/docker/cp-test_ha-278127_ha-278127-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127:/home/docker/cp-test.txt ha-278127-m04:/home/docker/cp-test_ha-278127_ha-278127-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m04 "sudo cat /home/docker/cp-test_ha-278127_ha-278127-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp testdata/cp-test.txt ha-278127-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837002730/001/cp-test_ha-278127-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m02:/home/docker/cp-test.txt ha-278127:/home/docker/cp-test_ha-278127-m02_ha-278127.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127 "sudo cat /home/docker/cp-test_ha-278127-m02_ha-278127.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m02:/home/docker/cp-test.txt ha-278127-m03:/home/docker/cp-test_ha-278127-m02_ha-278127-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m03 "sudo cat /home/docker/cp-test_ha-278127-m02_ha-278127-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m02:/home/docker/cp-test.txt ha-278127-m04:/home/docker/cp-test_ha-278127-m02_ha-278127-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m04 "sudo cat /home/docker/cp-test_ha-278127-m02_ha-278127-m04.txt"
E1126 20:02:29.569874    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp testdata/cp-test.txt ha-278127-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837002730/001/cp-test_ha-278127-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m03:/home/docker/cp-test.txt ha-278127:/home/docker/cp-test_ha-278127-m03_ha-278127.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127 "sudo cat /home/docker/cp-test_ha-278127-m03_ha-278127.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m03:/home/docker/cp-test.txt ha-278127-m02:/home/docker/cp-test_ha-278127-m03_ha-278127-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m02 "sudo cat /home/docker/cp-test_ha-278127-m03_ha-278127-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m03:/home/docker/cp-test.txt ha-278127-m04:/home/docker/cp-test_ha-278127-m03_ha-278127-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m04 "sudo cat /home/docker/cp-test_ha-278127-m03_ha-278127-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp testdata/cp-test.txt ha-278127-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837002730/001/cp-test_ha-278127-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127:/home/docker/cp-test_ha-278127-m04_ha-278127.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127 "sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127-m02:/home/docker/cp-test_ha-278127-m04_ha-278127-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m02 "sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 cp ha-278127-m04:/home/docker/cp-test.txt ha-278127-m03:/home/docker/cp-test_ha-278127-m04_ha-278127-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 ssh -n ha-278127-m03 "sudo cat /home/docker/cp-test_ha-278127-m04_ha-278127-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.54s)

TestMultiControlPlane/serial/StopSecondaryNode (12.84s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 node stop m02 --alsologtostderr -v 5: (12.054470666s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5: exit status 7 (785.494743ms)

-- stdout --
	ha-278127
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-278127-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-278127-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-278127-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1126 20:02:51.056653   47769 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:02:51.056832   47769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:02:51.056844   47769 out.go:374] Setting ErrFile to fd 2...
	I1126 20:02:51.056850   47769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:02:51.057129   47769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:02:51.057318   47769 out.go:368] Setting JSON to false
	I1126 20:02:51.057352   47769 mustload.go:66] Loading cluster: ha-278127
	I1126 20:02:51.057480   47769 notify.go:221] Checking for updates...
	I1126 20:02:51.057771   47769 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:02:51.057789   47769 status.go:174] checking status of ha-278127 ...
	I1126 20:02:51.058671   47769 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:02:51.083225   47769 status.go:371] ha-278127 host status = "Running" (err=<nil>)
	I1126 20:02:51.083250   47769 host.go:66] Checking if "ha-278127" exists ...
	I1126 20:02:51.083601   47769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127
	I1126 20:02:51.110091   47769 host.go:66] Checking if "ha-278127" exists ...
	I1126 20:02:51.110545   47769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:02:51.110638   47769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127
	I1126 20:02:51.128947   47769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127/id_rsa Username:docker}
	I1126 20:02:51.240488   47769 ssh_runner.go:195] Run: systemctl --version
	I1126 20:02:51.247113   47769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:02:51.262736   47769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:02:51.325892   47769 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-26 20:02:51.315368122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:02:51.326600   47769 kubeconfig.go:125] found "ha-278127" server: "https://192.168.49.254:8443"
	I1126 20:02:51.326638   47769 api_server.go:166] Checking apiserver status ...
	I1126 20:02:51.326690   47769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:02:51.339838   47769 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1252/cgroup
	I1126 20:02:51.348836   47769 api_server.go:182] apiserver freezer: "4:freezer:/docker/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/crio/crio-7d9ca98544e03f903996197982d8ad2eddb7eed4120f682d3850781f2b67399b"
	I1126 20:02:51.348908   47769 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0081e5a17ed52117b4c7a79337cf9bbf7bd3f15756d06fdbc0f411993351e8dd/crio/crio-7d9ca98544e03f903996197982d8ad2eddb7eed4120f682d3850781f2b67399b/freezer.state
	I1126 20:02:51.356671   47769 api_server.go:204] freezer state: "THAWED"
	I1126 20:02:51.356712   47769 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1126 20:02:51.365123   47769 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1126 20:02:51.365152   47769 status.go:463] ha-278127 apiserver status = Running (err=<nil>)
	I1126 20:02:51.365163   47769 status.go:176] ha-278127 status: &{Name:ha-278127 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:02:51.365193   47769 status.go:174] checking status of ha-278127-m02 ...
	I1126 20:02:51.365508   47769 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:02:51.383039   47769 status.go:371] ha-278127-m02 host status = "Stopped" (err=<nil>)
	I1126 20:02:51.383066   47769 status.go:384] host is not running, skipping remaining checks
	I1126 20:02:51.383073   47769 status.go:176] ha-278127-m02 status: &{Name:ha-278127-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:02:51.383105   47769 status.go:174] checking status of ha-278127-m03 ...
	I1126 20:02:51.383424   47769 cli_runner.go:164] Run: docker container inspect ha-278127-m03 --format={{.State.Status}}
	I1126 20:02:51.401589   47769 status.go:371] ha-278127-m03 host status = "Running" (err=<nil>)
	I1126 20:02:51.401613   47769 host.go:66] Checking if "ha-278127-m03" exists ...
	I1126 20:02:51.402034   47769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m03
	I1126 20:02:51.427085   47769 host.go:66] Checking if "ha-278127-m03" exists ...
	I1126 20:02:51.427573   47769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:02:51.427656   47769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m03
	I1126 20:02:51.446130   47769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m03/id_rsa Username:docker}
	I1126 20:02:51.547395   47769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:02:51.560773   47769 kubeconfig.go:125] found "ha-278127" server: "https://192.168.49.254:8443"
	I1126 20:02:51.560805   47769 api_server.go:166] Checking apiserver status ...
	I1126 20:02:51.560856   47769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:02:51.573413   47769 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1199/cgroup
	I1126 20:02:51.583550   47769 api_server.go:182] apiserver freezer: "4:freezer:/docker/a2532e9fcb93e3127927ab36ca23c16809901a0cbf6af9d206139940737727c7/crio/crio-0111b27175dd275e27f6097f08432eae4d690912a6018162448af4351015a243"
	I1126 20:02:51.583636   47769 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a2532e9fcb93e3127927ab36ca23c16809901a0cbf6af9d206139940737727c7/crio/crio-0111b27175dd275e27f6097f08432eae4d690912a6018162448af4351015a243/freezer.state
	I1126 20:02:51.591623   47769 api_server.go:204] freezer state: "THAWED"
	I1126 20:02:51.591649   47769 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1126 20:02:51.599754   47769 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1126 20:02:51.599783   47769 status.go:463] ha-278127-m03 apiserver status = Running (err=<nil>)
	I1126 20:02:51.599792   47769 status.go:176] ha-278127-m03 status: &{Name:ha-278127-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:02:51.599839   47769 status.go:174] checking status of ha-278127-m04 ...
	I1126 20:02:51.600173   47769 cli_runner.go:164] Run: docker container inspect ha-278127-m04 --format={{.State.Status}}
	I1126 20:02:51.617840   47769 status.go:371] ha-278127-m04 host status = "Running" (err=<nil>)
	I1126 20:02:51.617871   47769 host.go:66] Checking if "ha-278127-m04" exists ...
	I1126 20:02:51.618206   47769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-278127-m04
	I1126 20:02:51.637108   47769 host.go:66] Checking if "ha-278127-m04" exists ...
	I1126 20:02:51.637426   47769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:02:51.637471   47769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-278127-m04
	I1126 20:02:51.655516   47769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/ha-278127-m04/id_rsa Username:docker}
	I1126 20:02:51.767100   47769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:02:51.782374   47769 status.go:176] ha-278127-m04 status: &{Name:ha-278127-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.84s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (33.91s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 node start m02 --alsologtostderr -v 5
E1126 20:03:10.531182    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 node start m02 --alsologtostderr -v 5: (32.577825049s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5: (1.211933942s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.91s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.237406312s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.31s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 stop --alsologtostderr -v 5: (37.555710127s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 start --wait true --alsologtostderr -v 5
E1126 20:04:28.112459    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:04:32.454071    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 start --wait true --alsologtostderr -v 5: (1m30.559069896s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (128.31s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.83s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 node delete m03 --alsologtostderr -v 5: (10.869903819s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.83s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

TestMultiControlPlane/serial/StopCluster (36.14s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-278127 stop --alsologtostderr -v 5: (36.019007719s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-278127 status --alsologtostderr -v 5: exit status 7 (117.03103ms)

-- stdout --
	ha-278127
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-278127-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-278127-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1126 20:06:24.733862   59933 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:06:24.734075   59933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.734106   59933 out.go:374] Setting ErrFile to fd 2...
	I1126 20:06:24.734127   59933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:06:24.734422   59933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:06:24.734655   59933 out.go:368] Setting JSON to false
	I1126 20:06:24.734722   59933 mustload.go:66] Loading cluster: ha-278127
	I1126 20:06:24.734798   59933 notify.go:221] Checking for updates...
	I1126 20:06:24.735233   59933 config.go:182] Loaded profile config "ha-278127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:06:24.735272   59933 status.go:174] checking status of ha-278127 ...
	I1126 20:06:24.735846   59933 cli_runner.go:164] Run: docker container inspect ha-278127 --format={{.State.Status}}
	I1126 20:06:24.755281   59933 status.go:371] ha-278127 host status = "Stopped" (err=<nil>)
	I1126 20:06:24.755303   59933 status.go:384] host is not running, skipping remaining checks
	I1126 20:06:24.755310   59933 status.go:176] ha-278127 status: &{Name:ha-278127 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:06:24.755341   59933 status.go:174] checking status of ha-278127-m02 ...
	I1126 20:06:24.755646   59933 cli_runner.go:164] Run: docker container inspect ha-278127-m02 --format={{.State.Status}}
	I1126 20:06:24.783554   59933 status.go:371] ha-278127-m02 host status = "Stopped" (err=<nil>)
	I1126 20:06:24.783574   59933 status.go:384] host is not running, skipping remaining checks
	I1126 20:06:24.783607   59933 status.go:176] ha-278127-m02 status: &{Name:ha-278127-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:06:24.783639   59933 status.go:174] checking status of ha-278127-m04 ...
	I1126 20:06:24.783920   59933 cli_runner.go:164] Run: docker container inspect ha-278127-m04 --format={{.State.Status}}
	I1126 20:06:24.800974   59933 status.go:371] ha-278127-m04 host status = "Stopped" (err=<nil>)
	I1126 20:06:24.801022   59933 status.go:384] host is not running, skipping remaining checks
	I1126 20:06:24.801030   59933 status.go:176] ha-278127-m04 status: &{Name:ha-278127-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.14s)

TestJSONOutput/start/Command (82.62s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-053036 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1126 20:16:48.594141    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:17:31.177997    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-053036 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.615649239s)
--- PASS: TestJSONOutput/start/Command (82.62s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.81s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-053036 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-053036 --output=json --user=testUser: (5.810327567s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-737469 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-737469 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (101.701503ms)

-- stdout --
	{"specversion":"1.0","id":"10222031-48b1-401d-93ed-6c8b6d355739","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-737469] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5fe7b83c-5ae9-43ac-a4d0-84a2d58ab87a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21974"}}
	{"specversion":"1.0","id":"dfbd0c43-9e66-4c47-882f-eb32425231a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6d5c44ec-3a77-4992-89d9-8edd0688434f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig"}}
	{"specversion":"1.0","id":"4cb5537d-22b9-4d86-a250-ed94bbcdb003","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube"}}
	{"specversion":"1.0","id":"5b24e736-f9ef-4204-b17a-464fabced162","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"658c2914-a264-41d4-bbe2-70039bfda075","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a6f1bfc6-9a86-4d8d-a2e1-e273b8a6df74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-737469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-737469
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (44.88s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-364513 --network=
E1126 20:18:11.663719    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-364513 --network=: (42.655342822s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-364513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-364513
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-364513: (2.205207857s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.88s)

TestKicCustomNetwork/use_default_bridge_network (34.4s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-785131 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-785131 --network=bridge: (32.237661994s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-785131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-785131
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-785131: (2.135943166s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.40s)

TestKicExistingNetwork (34.98s)

=== RUN   TestKicExistingNetwork
I1126 20:19:13.259656    4129 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1126 20:19:13.274096    4129 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1126 20:19:13.274952    4129 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1126 20:19:13.274984    4129 cli_runner.go:164] Run: docker network inspect existing-network
W1126 20:19:13.290796    4129 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1126 20:19:13.290824    4129 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1126 20:19:13.290840    4129 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1126 20:19:13.290949    4129 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1126 20:19:13.308935    4129 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-20cb65a83ad5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d6:26:47:2b:2e:03} reservation:<nil>}
I1126 20:19:13.309262    4129 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ad9620}
I1126 20:19:13.309286    4129 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1126 20:19:13.309332    4129 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1126 20:19:13.379218    4129 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-248845 --network=existing-network
E1126 20:19:28.118211    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-248845 --network=existing-network: (32.682043359s)
helpers_test.go:175: Cleaning up "existing-network-248845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-248845
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-248845: (2.144772623s)
I1126 20:19:48.222074    4129 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.98s)

TestKicCustomSubnet (39.17s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-896822 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-896822 --subnet=192.168.60.0/24: (36.976088506s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-896822 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-896822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-896822
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-896822: (2.162636902s)
--- PASS: TestKicCustomSubnet (39.17s)

TestKicStaticIP (36.73s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-348602 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-348602 --static-ip=192.168.200.200: (34.306781901s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-348602 ip
helpers_test.go:175: Cleaning up "static-ip-348602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-348602
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-348602: (2.262038756s)
--- PASS: TestKicStaticIP (36.73s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (71.64s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-059141 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-059141 --driver=docker  --container-runtime=crio: (31.683304236s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-061480 --driver=docker  --container-runtime=crio
E1126 20:21:48.593677    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-061480 --driver=docker  --container-runtime=crio: (33.971535262s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-059141
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-061480
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-061480" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-061480
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-061480: (2.129576967s)
helpers_test.go:175: Cleaning up "first-059141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-059141
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-059141: (2.409866578s)
--- PASS: TestMinikubeProfile (71.64s)

TestMountStart/serial/StartWithMountFirst (8.54s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-154072 --memory=3072 --mount-string /tmp/TestMountStartserial61663932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-154072 --memory=3072 --mount-string /tmp/TestMountStartserial61663932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.537627503s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.54s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-154072 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (8.85s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-155935 --memory=3072 --mount-string /tmp/TestMountStartserial61663932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-155935 --memory=3072 --mount-string /tmp/TestMountStartserial61663932/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.853431773s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.85s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-155935 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-154072 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-154072 --alsologtostderr -v=5: (1.721004812s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-155935 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.31s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-155935
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-155935: (1.309988626s)
--- PASS: TestMountStart/serial/Stop (1.31s)

TestMountStart/serial/RestartStopped (7.78s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-155935
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-155935: (6.78161686s)
--- PASS: TestMountStart/serial/RestartStopped (7.78s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-155935 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (136.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-226784 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1126 20:24:28.112428    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-226784 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m15.861031034s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.40s)

TestMultiNode/serial/DeployApp2Nodes (4.95s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-226784 -- rollout status deployment/busybox: (3.196463872s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- exec busybox-7b57f96db7-58qrd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- exec busybox-7b57f96db7-ggqx7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- exec busybox-7b57f96db7-58qrd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- exec busybox-7b57f96db7-ggqx7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- exec busybox-7b57f96db7-58qrd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- exec busybox-7b57f96db7-ggqx7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.95s)

TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- exec busybox-7b57f96db7-58qrd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- exec busybox-7b57f96db7-58qrd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- exec busybox-7b57f96db7-ggqx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-226784 -- exec busybox-7b57f96db7-ggqx7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
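The host-IP extraction above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) depends on the line layout of BusyBox's nslookup output. A minimal, self-contained sketch of that pipeline against canned output — the addresses and the five-line layout are illustrative assumptions, not captured from this run:

```shell
# Canned output mimicking BusyBox nslookup (layout assumed; real output
# varies by resolver implementation).
lookup='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal'

# Same pipeline the test runs inside each pod: take line 5, then the
# third space-separated field, yielding the host IP to ping.
hostip=$(printf '%s\n' "$lookup" | awk 'NR==5' | cut -d' ' -f3)
echo "$hostip"
```

Note that `cut -d' '` splits on single spaces, so the pipeline only works because line 5 uses single-space separators; `awk '{print $3}'` would tolerate arbitrary whitespace.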

TestMultiNode/serial/AddNode (58.02s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-226784 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-226784 -v=5 --alsologtostderr: (57.298761752s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.02s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-226784 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp testdata/cp-test.txt multinode-226784:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp multinode-226784:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3551117308/001/cp-test_multinode-226784.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp multinode-226784:/home/docker/cp-test.txt multinode-226784-m02:/home/docker/cp-test_multinode-226784_multinode-226784-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m02 "sudo cat /home/docker/cp-test_multinode-226784_multinode-226784-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp multinode-226784:/home/docker/cp-test.txt multinode-226784-m03:/home/docker/cp-test_multinode-226784_multinode-226784-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m03 "sudo cat /home/docker/cp-test_multinode-226784_multinode-226784-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp testdata/cp-test.txt multinode-226784-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp multinode-226784-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3551117308/001/cp-test_multinode-226784-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp multinode-226784-m02:/home/docker/cp-test.txt multinode-226784:/home/docker/cp-test_multinode-226784-m02_multinode-226784.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784 "sudo cat /home/docker/cp-test_multinode-226784-m02_multinode-226784.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp multinode-226784-m02:/home/docker/cp-test.txt multinode-226784-m03:/home/docker/cp-test_multinode-226784-m02_multinode-226784-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m03 "sudo cat /home/docker/cp-test_multinode-226784-m02_multinode-226784-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp testdata/cp-test.txt multinode-226784-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp multinode-226784-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3551117308/001/cp-test_multinode-226784-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp multinode-226784-m03:/home/docker/cp-test.txt multinode-226784:/home/docker/cp-test_multinode-226784-m03_multinode-226784.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784 "sudo cat /home/docker/cp-test_multinode-226784-m03_multinode-226784.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 cp multinode-226784-m03:/home/docker/cp-test.txt multinode-226784-m02:/home/docker/cp-test_multinode-226784-m03_multinode-226784-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 ssh -n multinode-226784-m02 "sudo cat /home/docker/cp-test_multinode-226784-m03_multinode-226784-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.24s)

TestMultiNode/serial/StopNode (2.44s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-226784 node stop m03: (1.333990601s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-226784 status: exit status 7 (562.792669ms)
-- stdout --
	multinode-226784
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-226784-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-226784-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-226784 status --alsologtostderr: exit status 7 (539.923616ms)
-- stdout --
	multinode-226784
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-226784-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-226784-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1126 20:26:20.350537  123086 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:26:20.350767  123086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:26:20.350815  123086 out.go:374] Setting ErrFile to fd 2...
	I1126 20:26:20.350837  123086 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:26:20.351168  123086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:26:20.351619  123086 out.go:368] Setting JSON to false
	I1126 20:26:20.351687  123086 mustload.go:66] Loading cluster: multinode-226784
	I1126 20:26:20.351788  123086 notify.go:221] Checking for updates...
	I1126 20:26:20.352137  123086 config.go:182] Loaded profile config "multinode-226784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:26:20.352174  123086 status.go:174] checking status of multinode-226784 ...
	I1126 20:26:20.353022  123086 cli_runner.go:164] Run: docker container inspect multinode-226784 --format={{.State.Status}}
	I1126 20:26:20.372766  123086 status.go:371] multinode-226784 host status = "Running" (err=<nil>)
	I1126 20:26:20.372793  123086 host.go:66] Checking if "multinode-226784" exists ...
	I1126 20:26:20.373094  123086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-226784
	I1126 20:26:20.405318  123086 host.go:66] Checking if "multinode-226784" exists ...
	I1126 20:26:20.405625  123086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:26:20.405678  123086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-226784
	I1126 20:26:20.425115  123086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/multinode-226784/id_rsa Username:docker}
	I1126 20:26:20.527204  123086 ssh_runner.go:195] Run: systemctl --version
	I1126 20:26:20.533342  123086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:26:20.545709  123086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:26:20.611319  123086 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-26 20:26:20.601983689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:26:20.611878  123086 kubeconfig.go:125] found "multinode-226784" server: "https://192.168.67.2:8443"
	I1126 20:26:20.611911  123086 api_server.go:166] Checking apiserver status ...
	I1126 20:26:20.611961  123086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1126 20:26:20.623126  123086 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	I1126 20:26:20.632750  123086 api_server.go:182] apiserver freezer: "4:freezer:/docker/4056ab504ab6a3ceb3ca39c24d0253ed849c7f2287c02d7132d97aea7216dbee/crio/crio-d2d800ddcf8f556e673b1dab1c5f143dcae7ef39b82eddf998a20d831e2a80ee"
	I1126 20:26:20.632823  123086 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4056ab504ab6a3ceb3ca39c24d0253ed849c7f2287c02d7132d97aea7216dbee/crio/crio-d2d800ddcf8f556e673b1dab1c5f143dcae7ef39b82eddf998a20d831e2a80ee/freezer.state
	I1126 20:26:20.641003  123086 api_server.go:204] freezer state: "THAWED"
	I1126 20:26:20.641033  123086 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1126 20:26:20.649161  123086 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1126 20:26:20.649189  123086 status.go:463] multinode-226784 apiserver status = Running (err=<nil>)
	I1126 20:26:20.649200  123086 status.go:176] multinode-226784 status: &{Name:multinode-226784 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:26:20.649215  123086 status.go:174] checking status of multinode-226784-m02 ...
	I1126 20:26:20.649515  123086 cli_runner.go:164] Run: docker container inspect multinode-226784-m02 --format={{.State.Status}}
	I1126 20:26:20.670345  123086 status.go:371] multinode-226784-m02 host status = "Running" (err=<nil>)
	I1126 20:26:20.670373  123086 host.go:66] Checking if "multinode-226784-m02" exists ...
	I1126 20:26:20.670670  123086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-226784-m02
	I1126 20:26:20.687327  123086 host.go:66] Checking if "multinode-226784-m02" exists ...
	I1126 20:26:20.687634  123086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1126 20:26:20.687684  123086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-226784-m02
	I1126 20:26:20.704258  123086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21974-2326/.minikube/machines/multinode-226784-m02/id_rsa Username:docker}
	I1126 20:26:20.802893  123086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1126 20:26:20.815530  123086 status.go:176] multinode-226784-m02 status: &{Name:multinode-226784-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:26:20.815565  123086 status.go:174] checking status of multinode-226784-m03 ...
	I1126 20:26:20.815877  123086 cli_runner.go:164] Run: docker container inspect multinode-226784-m03 --format={{.State.Status}}
	I1126 20:26:20.839845  123086 status.go:371] multinode-226784-m03 host status = "Stopped" (err=<nil>)
	I1126 20:26:20.839872  123086 status.go:384] host is not running, skipping remaining checks
	I1126 20:26:20.839879  123086 status.go:176] multinode-226784-m03 status: &{Name:multinode-226784-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
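In the `--alsologtostderr` trace above, the status check reads node disk usage with `df -h /var | awk 'NR==2{print $5}'` over SSH. A self-contained sketch of that parse against canned `df` output — the filesystem name and figures below are made up for illustration:

```shell
# Canned df -h output (values illustrative); the real command runs over
# SSH inside the node container.
dfout='Filesystem      Size  Used Avail Use% Mounted on
/dev/root        97G   12G   85G  13% /var'

# NR==2 skips the header row; $5 is the Use% column. awk's default
# field splitting handles the variable-width columns.
usage=$(printf '%s\n' "$dfout" | awk 'NR==2{print $5}')
echo "$usage"
```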

TestMultiNode/serial/StartAfterStop (7.97s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-226784 node start m03 -v=5 --alsologtostderr: (7.191123921s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.97s)

TestMultiNode/serial/RestartKeepsNodes (82.57s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-226784
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-226784
E1126 20:26:48.594056    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-226784: (25.161326441s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-226784 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-226784 --wait=true -v=5 --alsologtostderr: (57.281748579s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-226784
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.57s)

TestMultiNode/serial/DeleteNode (5.65s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-226784 node delete m03: (4.935715416s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.65s)

TestMultiNode/serial/StopMultiNode (23.93s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-226784 stop: (23.752944192s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-226784 status: exit status 7 (85.760417ms)
-- stdout --
	multinode-226784
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-226784-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-226784 status --alsologtostderr: exit status 7 (91.483708ms)
-- stdout --
	multinode-226784
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-226784-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1126 20:28:20.908294  130887 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:28:20.908400  130887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:28:20.908411  130887 out.go:374] Setting ErrFile to fd 2...
	I1126 20:28:20.908415  130887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:28:20.908653  130887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:28:20.908818  130887 out.go:368] Setting JSON to false
	I1126 20:28:20.908853  130887 mustload.go:66] Loading cluster: multinode-226784
	I1126 20:28:20.908925  130887 notify.go:221] Checking for updates...
	I1126 20:28:20.909767  130887 config.go:182] Loaded profile config "multinode-226784": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:28:20.909788  130887 status.go:174] checking status of multinode-226784 ...
	I1126 20:28:20.910373  130887 cli_runner.go:164] Run: docker container inspect multinode-226784 --format={{.State.Status}}
	I1126 20:28:20.927379  130887 status.go:371] multinode-226784 host status = "Stopped" (err=<nil>)
	I1126 20:28:20.927399  130887 status.go:384] host is not running, skipping remaining checks
	I1126 20:28:20.927406  130887 status.go:176] multinode-226784 status: &{Name:multinode-226784 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1126 20:28:20.927435  130887 status.go:174] checking status of multinode-226784-m02 ...
	I1126 20:28:20.927735  130887 cli_runner.go:164] Run: docker container inspect multinode-226784-m02 --format={{.State.Status}}
	I1126 20:28:20.953002  130887 status.go:371] multinode-226784-m02 host status = "Stopped" (err=<nil>)
	I1126 20:28:20.953024  130887 status.go:384] host is not running, skipping remaining checks
	I1126 20:28:20.953040  130887 status.go:176] multinode-226784-m02 status: &{Name:multinode-226784-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.93s)

TestMultiNode/serial/RestartMultiNode (46.99s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-226784 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-226784 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.283219666s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-226784 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.99s)

TestMultiNode/serial/ValidateNameConflict (37.43s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-226784
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-226784-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-226784-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.573422ms)
-- stdout --
	* [multinode-226784-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-226784-m02' is duplicated with machine name 'multinode-226784-m02' in profile 'multinode-226784'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-226784-m03 --driver=docker  --container-runtime=crio
E1126 20:29:28.117335    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-226784-m03 --driver=docker  --container-runtime=crio: (34.76309618s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-226784
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-226784: exit status 80 (343.520941ms)
-- stdout --
	* Adding node m03 to cluster multinode-226784 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-226784-m03 already exists in multinode-226784-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-226784-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-226784-m03: (2.182574449s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.43s)

                                                
                                    
TestPreload (124.66s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-244513 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-244513 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m0.829516018s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-244513 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-244513 image pull gcr.io/k8s-minikube/busybox: (2.127031483s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-244513
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-244513: (5.855419685s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-244513 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1126 20:31:48.594146    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-244513 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (53.189503515s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-244513 image list
helpers_test.go:175: Cleaning up "test-preload-244513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-244513
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-244513: (2.425022049s)
--- PASS: TestPreload (124.66s)

                                                
                                    
TestScheduledStopUnix (109.86s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-063984 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-063984 --memory=3072 --driver=docker  --container-runtime=crio: (33.369116936s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-063984 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1126 20:32:27.761429  144851 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:32:27.761585  144851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:32:27.761618  144851 out.go:374] Setting ErrFile to fd 2...
	I1126 20:32:27.761631  144851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:32:27.763674  144851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:32:27.763999  144851 out.go:368] Setting JSON to false
	I1126 20:32:27.764192  144851 mustload.go:66] Loading cluster: scheduled-stop-063984
	I1126 20:32:27.764578  144851 config.go:182] Loaded profile config "scheduled-stop-063984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:32:27.764682  144851 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/config.json ...
	I1126 20:32:27.764892  144851 mustload.go:66] Loading cluster: scheduled-stop-063984
	I1126 20:32:27.765042  144851 config.go:182] Loaded profile config "scheduled-stop-063984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-063984 -n scheduled-stop-063984
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-063984 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1126 20:32:28.242828  144941 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:32:28.243072  144941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:32:28.243089  144941 out.go:374] Setting ErrFile to fd 2...
	I1126 20:32:28.243095  144941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:32:28.243433  144941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:32:28.243758  144941 out.go:368] Setting JSON to false
	I1126 20:32:28.244995  144941 daemonize_unix.go:73] killing process 144868 as it is an old scheduled stop
	I1126 20:32:28.245148  144941 mustload.go:66] Loading cluster: scheduled-stop-063984
	I1126 20:32:28.245562  144941 config.go:182] Loaded profile config "scheduled-stop-063984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:32:28.245638  144941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/config.json ...
	I1126 20:32:28.245830  144941 mustload.go:66] Loading cluster: scheduled-stop-063984
	I1126 20:32:28.245977  144941 config.go:182] Loaded profile config "scheduled-stop-063984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1126 20:32:28.251719    4129 retry.go:31] will retry after 111.713µs: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.252389    4129 retry.go:31] will retry after 182.207µs: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.252840    4129 retry.go:31] will retry after 249.919µs: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.254011    4129 retry.go:31] will retry after 277.442µs: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.255122    4129 retry.go:31] will retry after 441.272µs: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.256233    4129 retry.go:31] will retry after 963.366µs: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.257341    4129 retry.go:31] will retry after 1.294694ms: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.259533    4129 retry.go:31] will retry after 949.698µs: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.260645    4129 retry.go:31] will retry after 2.531845ms: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.263870    4129 retry.go:31] will retry after 5.25192ms: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.270170    4129 retry.go:31] will retry after 4.345726ms: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.275458    4129 retry.go:31] will retry after 11.360041ms: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.287971    4129 retry.go:31] will retry after 13.894893ms: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.302192    4129 retry.go:31] will retry after 19.09745ms: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.322421    4129 retry.go:31] will retry after 17.57137ms: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
I1126 20:32:28.340655    4129 retry.go:31] will retry after 25.2708ms: open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-063984 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-063984 -n scheduled-stop-063984
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-063984
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-063984 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1126 20:32:54.172033  145300 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:32:54.172147  145300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:32:54.172158  145300 out.go:374] Setting ErrFile to fd 2...
	I1126 20:32:54.172164  145300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:32:54.172398  145300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:32:54.172644  145300 out.go:368] Setting JSON to false
	I1126 20:32:54.172742  145300 mustload.go:66] Loading cluster: scheduled-stop-063984
	I1126 20:32:54.173119  145300 config.go:182] Loaded profile config "scheduled-stop-063984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:32:54.173193  145300 profile.go:143] Saving config to /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/scheduled-stop-063984/config.json ...
	I1126 20:32:54.173374  145300 mustload.go:66] Loading cluster: scheduled-stop-063984
	I1126 20:32:54.173486  145300 config.go:182] Loaded profile config "scheduled-stop-063984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-063984
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-063984: exit status 7 (68.051125ms)

                                                
                                                
-- stdout --
	scheduled-stop-063984
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-063984 -n scheduled-stop-063984
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-063984 -n scheduled-stop-063984: exit status 7 (67.6402ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-063984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-063984
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-063984: (4.884993396s)
--- PASS: TestScheduledStopUnix (109.86s)
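The `retry.go:31` lines above show an exponential backoff with jitter while polling for the scheduled-stop pid file (111µs, 182µs, … 25ms). A minimal Python sketch of that polling pattern; the growth factor, cap, and jitter range here are illustrative assumptions, not minikube's actual constants:

```python
import os
import random
import time

def wait_for_file(path, initial=0.0001, factor=1.6, cap=0.05, deadline=1.0):
    """Poll for `path`, sleeping with exponential backoff plus jitter.

    Returns True once the file exists, False if `deadline` seconds elapse first.
    """
    delay = initial
    start = time.monotonic()
    while time.monotonic() - start < deadline:
        if os.path.exists(path):
            return True
        # Sleep for the current delay with +/-25% jitter, then grow it toward the cap.
        time.sleep(delay * random.uniform(0.75, 1.25))
        delay = min(delay * factor, cap)
    return os.path.exists(path)
```

Each retry waits somewhat longer than the last, and the randomization keeps concurrent pollers from waking in lockstep, which matches the irregular delay sequence in the log.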

                                                
                                    
TestInsufficientStorage (12.96s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-884499 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-884499 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.417401587s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"813d00e6-a813-448f-995f-24330d237ce9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-884499] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"875a4108-9353-42de-8d9a-d31e3612a717","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21974"}}
	{"specversion":"1.0","id":"275daa48-9c33-482e-b326-f97fd05e7e2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eba6b2df-c6fd-48d8-8995-1df02c013a34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig"}}
	{"specversion":"1.0","id":"d63cdff3-1ac1-465e-ba82-db6575d7fda5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube"}}
	{"specversion":"1.0","id":"3972a026-7503-415f-a135-3b28eff74d38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e8492dee-3a2e-4228-8878-02495ce81308","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"07176034-9ced-4b88-806f-1f5e6392ad2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bdf43e10-8675-4cba-b2a5-99e07ae50dcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"70d57954-3b3d-401b-b687-12ee16b66638","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"459fada3-208b-43d3-b614-5fa89316e509","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4af3f11e-cd77-495d-8d95-47bfc0550a1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-884499\" primary control-plane node in \"insufficient-storage-884499\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d244b11f-d3b5-4ef5-8ad1-1856cba71446","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3762b297-6769-462e-8247-ff63f54b9708","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"890c73fb-aad8-42f2-83b5-d144245a4c83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-884499 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-884499 --output=json --layout=cluster: exit status 7 (300.252316ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-884499","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-884499","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1126 20:33:54.900574  147014 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-884499" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-884499 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-884499 --output=json --layout=cluster: exit status 7 (304.769765ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-884499","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-884499","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1126 20:33:55.206744  147082 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-884499" does not appear in /home/jenkins/minikube-integration/21974-2326/kubeconfig
	E1126 20:33:55.216874  147082 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/insufficient-storage-884499/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-884499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-884499
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-884499: (1.934815736s)
--- PASS: TestInsufficientStorage (12.96s)
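Each line of the `--output=json` stream above is a CloudEvents envelope whose `type` suffix (`step`, `info`, `error`) distinguishes progress from failures. A small Python sketch that pulls the exit code and message out of the first error event in such a stream; the sample events are abridged from the log, with field names matching what the log shows:

```python
import json

def find_error(lines):
    """Return (exitcode, message) from the first minikube error event, else None."""
    for line in lines:
        event = json.loads(line)
        if event["type"].endswith(".error"):
            data = event["data"]
            return int(data["exitcode"]), data["message"]
    return None

stream = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info",'
    '"data":{"message":"MINIKUBE_LOCATION=21974"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","message":"Docker is out of disk space!"}}',
]
print(find_error(stream))  # -> (26, 'Docker is out of disk space!')
```

This is how the test harness can assert on exit status 26 (`RSRC_DOCKER_STORAGE`) without scraping human-readable text.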

                                                
                                    
TestRunningBinaryUpgrade (316.62s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3728462418 start -p running-upgrade-215687 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3728462418 start -p running-upgrade-215687 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.222158735s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-215687 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1126 20:39:28.111606    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-215687 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.497997658s)
helpers_test.go:175: Cleaning up "running-upgrade-215687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-215687
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-215687: (2.127157338s)
--- PASS: TestRunningBinaryUpgrade (316.62s)

                                                
                                    
TestKubernetesUpgrade (193.46s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.831673786s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-007998
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-007998: (2.434457222s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-007998 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-007998 status --format={{.Host}}: exit status 7 (187.69084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m58.353775577s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-007998 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (90.28604ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-007998] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-007998
	    minikube start -p kubernetes-upgrade-007998 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0079982 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-007998 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-007998 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.960456353s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-007998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-007998
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-007998: (2.502464319s)
--- PASS: TestKubernetesUpgrade (193.46s)
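The exit-106 refusal above (`K8S_DOWNGRADE_UNSUPPORTED`) guards against in-place downgrades: the requested version is compared to the cluster's current version, and anything older is rejected before the node is touched. A rough Python sketch of that comparison, a simplified stand-in for minikube's actual semver handling:

```python
def parse_version(v):
    """Turn a version string like 'v1.34.1' into a comparable (1, 34, 1) tuple."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def check_version_change(current, requested):
    """Return 'upgrade' or 'restart', or raise on an unsupported downgrade."""
    cur, req = parse_version(current), parse_version(requested)
    if req < cur:
        raise ValueError(
            f"K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade {current} to {requested}"
        )
    return "restart" if req == cur else "upgrade"
```

Under this check, the test's sequence (start at v1.28.0, upgrade to v1.34.1, refuse v1.28.0, restart at v1.34.1) falls out naturally.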

                                                
                                    
TestMissingContainerUpgrade (117.38s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2976928761 start -p missing-upgrade-701119 --memory=3072 --driver=docker  --container-runtime=crio
E1126 20:34:11.179398    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:34:28.111692    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2976928761 start -p missing-upgrade-701119 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.873762861s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-701119
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-701119: (1.044399881s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-701119
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-701119 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-701119 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.829123268s)
helpers_test.go:175: Cleaning up "missing-upgrade-701119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-701119
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-701119: (3.571045533s)
--- PASS: TestMissingContainerUpgrade (117.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-784576 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-784576 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (103.07986ms)

-- stdout --
	* [NoKubernetes-784576] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (44.36s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-784576 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-784576 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.912298307s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-784576 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.36s)

TestNoKubernetes/serial/StartWithStopK8s (18.1s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-784576 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1126 20:34:51.666033    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-784576 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (15.434482452s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-784576 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-784576 status -o json: exit status 2 (362.203048ms)

-- stdout --
	{"Name":"NoKubernetes-784576","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-784576
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-784576: (2.301788758s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.10s)

TestNoKubernetes/serial/Start (8.75s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-784576 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-784576 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.749964518s)
--- PASS: TestNoKubernetes/serial/Start (8.75s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21974-2326/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-784576 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-784576 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.471012ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (0.68s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.68s)

TestNoKubernetes/serial/Stop (1.28s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-784576
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-784576: (1.284397014s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (7.53s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-784576 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-784576 --driver=docker  --container-runtime=crio: (7.527914534s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.53s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-784576 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-784576 "sudo systemctl is-active --quiet service kubelet": exit status 1 (303.569466ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (1.74s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.74s)

TestStoppedBinaryUpgrade/Upgrade (313.36s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.4155643229 start -p stopped-upgrade-569097 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.4155643229 start -p stopped-upgrade-569097 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.572296018s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.4155643229 -p stopped-upgrade-569097 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.4155643229 -p stopped-upgrade-569097 stop: (1.275207621s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-569097 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1126 20:36:48.593314    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-569097 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.508454999s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (313.36s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-569097
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-569097: (1.325148111s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.33s)

TestPause/serial/Start (84.95s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-166757 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1126 20:41:48.593762    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-166757 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.948290273s)
--- PASS: TestPause/serial/Start (84.95s)

TestPause/serial/SecondStartNoReconfiguration (41.52s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-166757 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-166757 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.50030724s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.52s)

TestNetworkPlugins/group/false (5.8s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-235709 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-235709 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (250.421386ms)

-- stdout --
	* [false-235709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21974
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I1126 20:43:54.977328  192123 out.go:360] Setting OutFile to fd 1 ...
	I1126 20:43:54.977546  192123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:43:54.977572  192123 out.go:374] Setting ErrFile to fd 2...
	I1126 20:43:54.977591  192123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1126 20:43:54.977874  192123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21974-2326/.minikube/bin
	I1126 20:43:54.978309  192123 out.go:368] Setting JSON to false
	I1126 20:43:54.979231  192123 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5165,"bootTime":1764184670,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1126 20:43:54.979317  192123 start.go:143] virtualization:  
	I1126 20:43:54.983087  192123 out.go:179] * [false-235709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1126 20:43:54.986172  192123 out.go:179]   - MINIKUBE_LOCATION=21974
	I1126 20:43:54.986257  192123 notify.go:221] Checking for updates...
	I1126 20:43:54.990660  192123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1126 20:43:54.993638  192123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21974-2326/kubeconfig
	I1126 20:43:54.996448  192123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21974-2326/.minikube
	I1126 20:43:54.999452  192123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1126 20:43:55.002262  192123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1126 20:43:55.005611  192123 config.go:182] Loaded profile config "force-systemd-flag-622960": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1126 20:43:55.005705  192123 driver.go:422] Setting default libvirt URI to qemu:///system
	I1126 20:43:55.043484  192123 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1126 20:43:55.043612  192123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1126 20:43:55.147275  192123 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-26 20:43:55.137375888 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1126 20:43:55.147387  192123 docker.go:319] overlay module found
	I1126 20:43:55.152175  192123 out.go:179] * Using the docker driver based on user configuration
	I1126 20:43:55.155629  192123 start.go:309] selected driver: docker
	I1126 20:43:55.155651  192123 start.go:927] validating driver "docker" against <nil>
	I1126 20:43:55.155665  192123 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1126 20:43:55.159248  192123 out.go:203] 
	W1126 20:43:55.162221  192123 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1126 20:43:55.165127  192123 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-235709 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-235709

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-235709

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-235709

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-235709

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-235709

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-235709

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-235709

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-235709

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-235709

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-235709

>>> host: /etc/nsswitch.conf:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: /etc/hosts:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: /etc/resolv.conf:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-235709

>>> host: crictl pods:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: crictl containers:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> k8s: describe netcat deployment:
error: context "false-235709" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-235709" does not exist

>>> k8s: netcat logs:
error: context "false-235709" does not exist

>>> k8s: describe coredns deployment:
error: context "false-235709" does not exist

>>> k8s: describe coredns pods:
error: context "false-235709" does not exist

>>> k8s: coredns logs:
error: context "false-235709" does not exist

>>> k8s: describe api server pod(s):
error: context "false-235709" does not exist

>>> k8s: api server logs:
error: context "false-235709" does not exist

>>> host: /etc/cni:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: ip a s:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: ip r s:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: iptables-save:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: iptables table nat:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> k8s: describe kube-proxy daemon set:
error: context "false-235709" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-235709" does not exist

>>> k8s: kube-proxy logs:
error: context "false-235709" does not exist

>>> host: kubelet daemon status:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: kubelet daemon config:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> k8s: kubelet logs:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-235709

>>> host: docker daemon status:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: docker daemon config:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: /etc/docker/daemon.json:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: docker system info:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: cri-docker daemon status:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

>>> host: cri-docker daemon config:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235709"

                                                
                                                
----------------------- debugLogs end: false-235709 [took: 5.234414688s] --------------------------------
helpers_test.go:175: Cleaning up "false-235709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-235709
--- PASS: TestNetworkPlugins/group/false (5.80s)

TestStartStop/group/old-k8s-version/serial/FirstStart (61.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.244222572s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.24s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-264537 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [08c368e4-7be3-4bc3-bde6-222d7bd7f0c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [08c368e4-7be3-4bc3-bde6-222d7bd7f0c1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00362827s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-264537 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-264537 --alsologtostderr -v=3
E1126 20:46:48.594200    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-264537 --alsologtostderr -v=3: (12.007398579s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264537 -n old-k8s-version-264537
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264537 -n old-k8s-version-264537: exit status 7 (74.74531ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-264537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (53.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-264537 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.687271843s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-264537 -n old-k8s-version-264537
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.08s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-zpz9j" [88b5eb99-bcb6-4aae-b2a8-afb053c2093c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003378035s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-zpz9j" [88b5eb99-bcb6-4aae-b2a8-afb053c2093c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.076260002s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-264537 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.28s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-264537 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/no-preload/serial/FirstStart (61.71s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m1.711654202s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.71s)

TestStartStop/group/no-preload/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-956694 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b82900f4-b9ca-4f50-9ac6-95bb86374236] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b82900f4-b9ca-4f50-9ac6-95bb86374236] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004078244s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-956694 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)

TestStartStop/group/no-preload/serial/Stop (11.99s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-956694 --alsologtostderr -v=3
E1126 20:49:28.115141    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-956694 --alsologtostderr -v=3: (11.994373461s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.99s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-956694 -n no-preload-956694
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-956694 -n no-preload-956694: exit status 7 (67.532986ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-956694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (59.4s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-956694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (59.031354426s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-956694 -n no-preload-956694
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (59.40s)

TestStartStop/group/embed-certs/serial/FirstStart (82.46s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m22.462414945s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.46s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f79rr" [b7202bf1-6dc0-4055-9c82-0fa5b068db9a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00385672s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f79rr" [b7202bf1-6dc0-4055-9c82-0fa5b068db9a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.039809168s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-956694 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-956694 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m22.261422533s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.26s)

TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-616586 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [350da55d-5536-49f4-9d13-9fdd1bb3c7de] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [350da55d-5536-49f4-9d13-9fdd1bb3c7de] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004882725s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-616586 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

TestStartStop/group/embed-certs/serial/Stop (12.63s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-616586 --alsologtostderr -v=3
E1126 20:51:31.667539    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:32.300000    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:32.306536    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:32.317857    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:32.339204    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:32.380623    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:32.462339    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:32.623877    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:32.945789    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:33.587332    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-616586 --alsologtostderr -v=3: (12.625091906s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.63s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-616586 -n embed-certs-616586
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-616586 -n embed-certs-616586: exit status 7 (96.962967ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-616586 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1126 20:51:34.870659    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (50.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1126 20:51:37.433485    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:42.555732    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:48.593626    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:51:52.797502    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:52:13.279470    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-616586 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.047339368s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-616586 -n embed-certs-616586
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.42s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-538119 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e45ab641-595b-4250-9bca-f10dee6cbe16] Pending
helpers_test.go:352: "busybox" [e45ab641-595b-4250-9bca-f10dee6cbe16] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e45ab641-595b-4250-9bca-f10dee6cbe16] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003713761s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-538119 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6hlql" [11e8fba4-bcc7-4952-a344-fcd4f0f6240a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003421477s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-538119 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-538119 --alsologtostderr -v=3: (12.080269398s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6hlql" [11e8fba4-bcc7-4952-a344-fcd4f0f6240a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003658552s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-616586 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-616586 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119: exit status 7 (91.125448ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-538119 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-538119 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.216567985s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-538119 -n default-k8s-diff-port-538119
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.67s)

TestStartStop/group/newest-cni/serial/FirstStart (40.65s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1126 20:52:54.241185    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.652701619s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.65s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.42s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-583801 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-583801 --alsologtostderr -v=3: (1.420582654s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.42s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rktgh" [975abbcd-6e87-4996-aeef-10e9c652170b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003858028s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-583801 -n newest-cni-583801
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-583801 -n newest-cni-583801: exit status 7 (76.466239ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-583801 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (18.49s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-583801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (17.95497432s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-583801 -n newest-cni-583801
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.49s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rktgh" [975abbcd-6e87-4996-aeef-10e9c652170b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004540506s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-538119 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-538119 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-583801 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

TestNetworkPlugins/group/auto/Start (85.59s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.59059669s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.59s)

TestNetworkPlugins/group/flannel/Start (65.34s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1126 20:54:12.138279    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:12.144618    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:12.155967    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:12.177311    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:12.218659    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:12.300179    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:12.461549    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:12.783622    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:13.425150    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:14.706955    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:16.163449    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:17.269079    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:22.390444    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:28.112047    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/addons-152801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:32.632415    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:54:53.114624    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.34424314s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.34s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-gb6lf" [94ae20dc-abac-40f9-a2d1-d7af2d9cbbad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003348984s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-235709 "pgrep -a kubelet"
I1126 20:55:14.743510    4129 config.go:182] Loaded profile config "flannel-235709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-235709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dgp77" [1d0ab28a-840f-4a28-a0e6-47eafdfad23f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dgp77" [1d0ab28a-840f-4a28-a0e6-47eafdfad23f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004286103s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.30s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-235709 "pgrep -a kubelet"
I1126 20:55:21.624502    4129 config.go:182] Loaded profile config "auto-235709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-235709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c7zvq" [ea78b774-4afe-4d90-9a49-a994642fea25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c7zvq" [ea78b774-4afe-4d90-9a49-a994642fea25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00429614s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-235709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-235709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/calico/Start (83.2s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m23.194965338s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.20s)

TestNetworkPlugins/group/custom-flannel/Start (70.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1126 20:56:32.300325    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:56:48.595173    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/functional-793215/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:56:55.998073    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:57:00.004812    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/old-k8s-version-264537/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m10.420759813s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.42s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-235709 "pgrep -a kubelet"
I1126 20:57:08.338104    4129 config.go:182] Loaded profile config "custom-flannel-235709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-235709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bpndg" [2e51b591-7ba6-4729-9b22-cea48e01aaab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bpndg" [2e51b591-7ba6-4729-9b22-cea48e01aaab] Running
E1126 20:57:17.652556    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:57:17.658932    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:57:17.670354    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:57:17.691685    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:57:17.733609    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:57:17.815039    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:57:17.976439    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:57:18.298287    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004709752s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-tcxt2" [19a055e1-61db-4ae2-8b4c-b570cd7d5581] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00327103s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-235709 "pgrep -a kubelet"
I1126 20:57:18.684447    4129 config.go:182] Loaded profile config "calico-235709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-235709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jm4w5" [dacc8232-ee43-457f-949e-e907b3b69dc8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jm4w5" [dacc8232-ee43-457f-949e-e907b3b69dc8] Running
E1126 20:57:27.905440    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.008245145s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-235709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1126 20:57:18.940411    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
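The Localhost and HairPin checks in these suites both reduce to a timed TCP connect (`nc -w 5 -i 5 -z <host> 8080`) executed inside the netcat pod. A minimal stdlib-only sketch of that probe semantics, with a local listener standing in for the netcat service (this is illustrative, not the suite's actual code):

```python
import socket
import threading

def probe(host, port, timeout=5.0):
    """Return True if a TCP connect to host:port completes within timeout,
    mirroring what `nc -w 5 -z host port` reports via its exit code."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand-in listener playing the role of the netcat deployment's port 8080.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))       # ephemeral port instead of 8080
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=srv.accept, daemon=True).start()

print(probe("127.0.0.1", port))  # listener up: connect succeeds
print(probe("127.0.0.1", 1))     # port 1: connection refused
```

In the real tests the HairPin variant dials the pod's own service name (`netcat`), so a success additionally proves the CNI supports hairpin NAT back into the originating pod.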

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-235709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (88.78s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m28.783417259s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.78s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (78.65s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1126 20:57:58.630938    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:58:39.593040    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/default-k8s-diff-port-538119/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1126 20:59:12.138817    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/no-preload-956694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m18.647199804s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.65s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-gtcvp" [b07d3ae5-a1ba-4c51-bfe8-c76c89c069e8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003696993s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-235709 "pgrep -a kubelet"
I1126 20:59:14.471495    4129 config.go:182] Loaded profile config "bridge-235709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-235709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4hgjw" [17603375-7478-4666-a67d-fc48c428d0d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4hgjw" [17603375-7478-4666-a67d-fc48c428d0d6] Running
I1126 20:59:18.761563    4129 config.go:182] Loaded profile config "kindnet-235709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003033467s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-235709 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-235709 replace --force -f testdata/netcat-deployment.yaml
I1126 20:59:19.053063    4129 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6hfqq" [31d23f2b-5c54-4580-9819-9c5b90344383] Pending
helpers_test.go:352: "netcat-cd4db9dbf-6hfqq" [31d23f2b-5c54-4580-9819-9c5b90344383] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6hfqq" [31d23f2b-5c54-4580-9819-9c5b90344383] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.0044076s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-235709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-235709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (52.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-235709 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (52.201505115s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-235709 "pgrep -a kubelet"
I1126 21:00:41.414405    4129 config.go:182] Loaded profile config "enable-default-cni-235709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-235709 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kkhtf" [d7454403-0503-438b-8651-ec7b457d6ff8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1126 21:00:42.366060    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/auto-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-kkhtf" [d7454403-0503-438b-8651-ec7b457d6ff8] Running
E1126 21:00:49.406975    4129 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21974-2326/.minikube/profiles/flannel-235709/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003192028s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-235709 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-235709 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    

Test skip (31/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.38s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-938641 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-938641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-938641
--- SKIP: TestDownloadOnlyKic (0.38s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-180932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-180932
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-235709 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-235709

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-235709

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-235709

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-235709

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-235709

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-235709

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-235709

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-235709

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-235709

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-235709

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: /etc/hosts:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: /etc/resolv.conf:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-235709

>>> host: crictl pods:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: crictl containers:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> k8s: describe netcat deployment:
error: context "kubenet-235709" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-235709" does not exist

>>> k8s: netcat logs:
error: context "kubenet-235709" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-235709" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-235709" does not exist

>>> k8s: coredns logs:
error: context "kubenet-235709" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-235709" does not exist

>>> k8s: api server logs:
error: context "kubenet-235709" does not exist

>>> host: /etc/cni:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: ip a s:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: ip r s:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: iptables-save:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: iptables table nat:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-235709" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-235709" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-235709" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: kubelet daemon config:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> k8s: kubelet logs:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-235709

>>> host: docker daemon status:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: docker daemon config:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: docker system info:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: cri-docker daemon status:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: cri-docker daemon config:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: cri-dockerd version:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: containerd daemon status:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: containerd daemon config:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: containerd config dump:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: crio daemon status:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: crio daemon config:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: /etc/crio:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

>>> host: crio config:
* Profile "kubenet-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235709"

----------------------- debugLogs end: kubenet-235709 [took: 4.31973323s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-235709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-235709
--- SKIP: TestNetworkPlugins/group/kubenet (4.50s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-235709 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-235709

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-235709

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-235709

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-235709

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-235709

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-235709

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-235709

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-235709

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-235709

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-235709

>>> host: /etc/nsswitch.conf:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: /etc/hosts:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: /etc/resolv.conf:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-235709

>>> host: crictl pods:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: crictl containers:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> k8s: describe netcat deployment:
error: context "cilium-235709" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-235709" does not exist

>>> k8s: netcat logs:
error: context "cilium-235709" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-235709" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-235709" does not exist

>>> k8s: coredns logs:
error: context "cilium-235709" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-235709" does not exist

>>> k8s: api server logs:
error: context "cilium-235709" does not exist

>>> host: /etc/cni:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: ip a s:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: ip r s:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: iptables-save:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: iptables table nat:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-235709

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-235709

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-235709" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-235709" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-235709

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-235709

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-235709" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-235709" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-235709" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-235709" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-235709" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: kubelet daemon config:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> k8s: kubelet logs:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-235709

>>> host: docker daemon status:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: docker daemon config:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: docker system info:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: cri-docker daemon status:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: cri-docker daemon config:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: cri-dockerd version:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: containerd daemon status:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: containerd daemon config:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: containerd config dump:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: crio daemon status:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: crio daemon config:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: /etc/crio:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

>>> host: crio config:
* Profile "cilium-235709" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235709"

----------------------- debugLogs end: cilium-235709 [took: 5.31807926s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-235709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-235709
--- SKIP: TestNetworkPlugins/group/cilium (5.59s)
